Test Report: Docker_macOS 14079

bc7278193255a66f30064dc56185dbbc87656da8:2022-05-31:24200

Failed tests (22/288)

TestDownloadOnly/v1.16.0/preload-exists (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.10s)
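The failure above is a plain file-existence check: the test stats the preloaded tarball in the cache and fails when the file is absent. A minimal sketch of that kind of check, using only the path from the message (illustrative, not the test's actual harness):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path taken verbatim from the failure message above.
		tarball := "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			// A missing file yields *fs.PathError, which prints as
			// "stat <path>: no such file or directory" -- the error seen above.
			fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		}
	}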

TestFunctional/parallel/DashboardCmd (304.55s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220531101620-2169 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:910: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220531101620-2169 --alsologtostderr -v=1] ...
functional_test.go:902: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220531101620-2169 --alsologtostderr -v=1] stdout:
functional_test.go:902: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220531101620-2169 --alsologtostderr -v=1] stderr:
I0531 10:18:59.205351    4053 out.go:296] Setting OutFile to fd 1 ...
I0531 10:18:59.205562    4053 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 10:18:59.205567    4053 out.go:309] Setting ErrFile to fd 2...
I0531 10:18:59.205572    4053 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 10:18:59.205669    4053 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
I0531 10:18:59.205855    4053 mustload.go:65] Loading cluster: functional-20220531101620-2169
I0531 10:18:59.206146    4053 config.go:178] Loaded profile config "functional-20220531101620-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0531 10:18:59.206495    4053 cli_runner.go:164] Run: docker container inspect functional-20220531101620-2169 --format={{.State.Status}}
I0531 10:18:59.275003    4053 host.go:66] Checking if "functional-20220531101620-2169" exists ...
I0531 10:18:59.275273    4053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220531101620-2169
I0531 10:18:59.343586    4053 api_server.go:165] Checking apiserver status ...
I0531 10:18:59.343706    4053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0531 10:18:59.343792    4053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220531101620-2169
I0531 10:18:59.413749    4053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50985 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/functional-20220531101620-2169/id_rsa Username:docker}
I0531 10:18:59.498640    4053 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5923/cgroup
W0531 10:18:59.506440    4053 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5923/cgroup: Process exited with status 1
stdout:

stderr:
I0531 10:18:59.506458    4053 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50989/healthz ...
I0531 10:18:59.511942    4053 api_server.go:266] https://127.0.0.1:50989/healthz returned 200:
ok
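The freezer-cgroup lookup above exits 1 (the egrep finds no freezer entry for the apiserver PID), so minikube falls back to probing the apiserver's healthz endpoint, which succeeds. A hedged sketch of such a probe against the forwarded port seen in this log; the InsecureSkipVerify transport is an assumption made for brevity, where a real client would be built from the cluster's TLS material:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	func main() {
		// Hypothetical probe; port 50989 is the forwarded 8441/tcp from this run.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: no CA pinning in this sketch
		}}
		resp, err := client.Get("https://127.0.0.1:50989/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode) // the log above shows 200 ("ok")
	}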
W0531 10:18:59.511972    4053 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0531 10:18:59.512121    4053 config.go:178] Loaded profile config "functional-20220531101620-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0531 10:18:59.512132    4053 addons.go:65] Setting dashboard=true in profile "functional-20220531101620-2169"
I0531 10:18:59.512141    4053 addons.go:153] Setting addon dashboard=true in "functional-20220531101620-2169"
I0531 10:18:59.512158    4053 host.go:66] Checking if "functional-20220531101620-2169" exists ...
I0531 10:18:59.512455    4053 cli_runner.go:164] Run: docker container inspect functional-20220531101620-2169 --format={{.State.Status}}
I0531 10:18:59.628740    4053 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
I0531 10:18:59.670560    4053 out.go:177]   - Using image kubernetesui/metrics-scraper:v1.0.7
I0531 10:18:59.691783    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0531 10:18:59.691800    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0531 10:18:59.691864    4053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220531101620-2169
I0531 10:18:59.760424    4053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50985 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/functional-20220531101620-2169/id_rsa Username:docker}
I0531 10:18:59.850682    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0531 10:18:59.850697    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0531 10:18:59.864556    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0531 10:18:59.864568    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0531 10:18:59.878667    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0531 10:18:59.878682    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0531 10:18:59.892058    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0531 10:18:59.892072    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes)
I0531 10:18:59.905367    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
I0531 10:18:59.905379    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0531 10:18:59.920268    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0531 10:18:59.920280    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0531 10:18:59.935168    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0531 10:18:59.935180    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0531 10:18:59.948701    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0531 10:18:59.948712    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0531 10:18:59.962805    4053 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0531 10:18:59.962818    4053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0531 10:18:59.978350    4053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0531 10:19:00.344984    4053 addons.go:116] Writing out "functional-20220531101620-2169" config to set dashboard=true...
W0531 10:19:00.345313    4053 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0531 10:19:00.345981    4053 kapi.go:59] client config for functional-20220531101620-2169: &rest.Config{Host:"https://127.0.0.1:50989", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22c2180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0531 10:19:00.354566    4053 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  9efe29b4-e0fc-49ca-951b-bee88080fa49 832 0 2022-05-31 10:19:00 -0700 PDT <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl-client-side-apply Update v1 2022-05-31 10:19:00 -0700 PDT FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.230.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.230.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0531 10:19:00.354676    4053 out.go:239] * Launching proxy ...
* Launching proxy ...
I0531 10:19:00.354746    4053 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-20220531101620-2169 proxy --port 36195]
I0531 10:19:00.357122    4053 dashboard.go:157] Waiting for kubectl to output host:port ...
I0531 10:19:00.386934    4053 dashboard.go:175] proxy stdout: Starting to serv  on 127.0.0 1:36195
W0531 10:19:00.386988    4053 out.go:239] * Verifying proxy health ...
* Verifying proxy health ...
I0531 10:19:00.387004    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.387048    4053 retry.go:31] will retry after 110.466µs: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.387233    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.387259    4053 retry.go:31] will retry after 216.077µs: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.387553    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.387566    4053 retry.go:31] will retry after 262.026µs: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.387945    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.387982    4053 retry.go:31] will retry after 316.478µs: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.388410    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.388424    4053 retry.go:31] will retry after 468.098µs: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.389040    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.389057    4053 retry.go:31] will retry after 901.244µs: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.390004    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.390023    4053 retry.go:31] will retry after 644.295µs: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.390863    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.390885    4053 retry.go:31] will retry after 1.121724ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.392133    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.392151    4053 retry.go:31] will retry after 1.529966ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.393784    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.393807    4053 retry.go:31] will retry after 3.078972ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.397705    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.397766    4053 retry.go:31] will retry after 5.854223ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.404112    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.404158    4053 retry.go:31] will retry after 11.362655ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.415959    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.415994    4053 retry.go:31] will retry after 9.267303ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.427304    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.427344    4053 retry.go:31] will retry after 17.139291ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.445282    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.445346    4053 retry.go:31] will retry after 23.881489ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.469306    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.469357    4053 retry.go:31] will retry after 42.427055ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.512577    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.512638    4053 retry.go:31] will retry after 51.432832ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.564569    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.564617    4053 retry.go:31] will retry after 78.14118ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.644638    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.644671    4053 retry.go:31] will retry after 174.255803ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.819055    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.819127    4053 retry.go:31] will retry after 159.291408ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:00.978498    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:00.978537    4053 retry.go:31] will retry after 233.827468ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:01.212722    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:01.212760    4053 retry.go:31] will retry after 429.392365ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:01.643534    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:01.643585    4053 retry.go:31] will retry after 801.058534ms: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:02.446687    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:02.446740    4053 retry.go:31] will retry after 1.529087469s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:03.976253    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:03.976292    4053 retry.go:31] will retry after 1.335136154s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:05.313620    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:05.313699    4053 retry.go:31] will retry after 2.012724691s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:07.326473    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:07.326513    4053 retry.go:31] will retry after 4.744335389s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:12.070936    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:12.071015    4053 retry.go:31] will retry after 4.014454686s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:16.087568    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:16.087694    4053 retry.go:31] will retry after 11.635741654s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:27.723649    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:27.723776    4053 retry.go:31] will retry after 15.298130033s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:19:43.022547    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:19:43.022620    4053 retry.go:31] will retry after 19.631844237s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:20:02.656369    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:20:02.656474    4053 retry.go:31] will retry after 15.195386994s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:20:17.853273    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:20:17.853318    4053 retry.go:31] will retry after 28.402880652s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:20:46.258070    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:20:46.258122    4053 retry.go:31] will retry after 1m6.435206373s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:21:52.693275    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:21:52.693361    4053 retry.go:31] will retry after 1m28.514497132s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:23:21.207122    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:23:21.207201    4053 retry.go:31] will retry after 34.767217402s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0531 10:23:55.974763    4053 dashboard.go:212] http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0531 10:23:55.974831    4053 retry.go:31] will retry after 1m5.688515861s: checkURL: parse "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
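The retry loop above can never converge because the URL it checks is malformed at its source: dashboard.go:175 read the proxy's stdout as "Starting to serv  on 127.0.0 1:36195", so the extracted host:port contains a space and Go's net/url rejects it on every attempt, which is why the test ultimately reports "output didn't produce a URL". A minimal standard-library-only sketch reproducing the parse error seen in each retry line:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// Host:port as recovered from the corrupted proxy stdout above.
		raw := "http://127.0.0 1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		if _, err := url.Parse(raw); err != nil {
			// Matches the checkURL error in the log:
			// invalid character " " in host name
			fmt.Println(err)
		}
	}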
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220531101620-2169
helpers_test.go:235: (dbg) docker inspect functional-20220531101620-2169:

-- stdout --
	[
	    {
	        "Id": "b1a56dd67e52206edf93f1572fd85758d91f03693e87f0ffe37bc1f0ae506684",
	        "Created": "2022-05-31T17:16:26.791826077Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:16:27.092829368Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a56dd67e52206edf93f1572fd85758d91f03693e87f0ffe37bc1f0ae506684/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a56dd67e52206edf93f1572fd85758d91f03693e87f0ffe37bc1f0ae506684/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a56dd67e52206edf93f1572fd85758d91f03693e87f0ffe37bc1f0ae506684/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a56dd67e52206edf93f1572fd85758d91f03693e87f0ffe37bc1f0ae506684/b1a56dd67e52206edf93f1572fd85758d91f03693e87f0ffe37bc1f0ae506684-json.log",
	        "Name": "/functional-20220531101620-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220531101620-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220531101620-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6ec5b0070ea6b55c60d4fa84164db18c03a12824e0cc578ee15ea3629f2a0189-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ec5b0070ea6b55c60d4fa84164db18c03a12824e0cc578ee15ea3629f2a0189/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ec5b0070ea6b55c60d4fa84164db18c03a12824e0cc578ee15ea3629f2a0189/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ec5b0070ea6b55c60d4fa84164db18c03a12824e0cc578ee15ea3629f2a0189/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220531101620-2169",
	                "Source": "/var/lib/docker/volumes/functional-20220531101620-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220531101620-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220531101620-2169",
	                "name.minikube.sigs.k8s.io": "functional-20220531101620-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6905b8a83cfe7359d35d000619de0e3fda0b58a2af8c5557262cbe0ad287bf41",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50985"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50986"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50988"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50989"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6905b8a83cfe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220531101620-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1a56dd67e52",
	                        "functional-20220531101620-2169"
	                    ],
	                    "NetworkID": "b0779ab203a89fdc87b897400ac0df06f4eebf10cdb865cc91d5f8f0631ec723",
	                    "EndpointID": "9d85ed0cc9e9379f7fcd33e1a5cc0247fc194e548c79fddac5b81bd9dc4a40f4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220531101620-2169 -n functional-20220531101620-2169
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 logs -n 25: (3.382020163s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------|--------------------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                       Args                        |            Profile             |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------|--------------------------------|---------|----------------|---------------------|---------------------|
	| profile        | list -o json                                      | minikube                       | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	| profile        | list -o json --light                              | minikube                       | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	| addons         | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | addons list                                       |                                |         |                |                     |                     |
	| addons         | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | addons list -o json                               |                                |         |                |                     |                     |
	| ssh            | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | ssh findmnt -T /mount-9p |                        |                                |         |                |                     |                     |
	|                | grep 9p                                           |                                |         |                |                     |                     |
	| service        | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | service list                                      |                                |         |                |                     |                     |
	| ssh            | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | ssh -- ls -la /mount-9p                           |                                |         |                |                     |                     |
	| ssh            | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | ssh cat                                           |                                |         |                |                     |                     |
	|                | /mount-9p/test-1654017528221361000                |                                |         |                |                     |                     |
	| service        | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | service --namespace=default                       |                                |         |                |                     |                     |
	|                | --https --url hello-node                          |                                |         |                |                     |                     |
	| service        | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | service hello-node --url                          |                                |         |                |                     |                     |
	|                | --format={{.IP}}                                  |                                |         |                |                     |                     |
	| service        | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | service hello-node --url                          |                                |         |                |                     |                     |
	| ssh            | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | ssh stat                                          |                                |         |                |                     |                     |
	|                | /mount-9p/created-by-test                         |                                |         |                |                     |                     |
	| ssh            | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | ssh stat                                          |                                |         |                |                     |                     |
	|                | /mount-9p/created-by-pod                          |                                |         |                |                     |                     |
	| ssh            | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | ssh sudo umount -f /mount-9p                      |                                |         |                |                     |                     |
	| ssh            | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | ssh findmnt -T /mount-9p |                        |                                |         |                |                     |                     |
	|                | grep 9p                                           |                                |         |                |                     |                     |
	| ssh            | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:18 PDT | 31 May 22 10:18 PDT |
	|                | ssh -- ls -la /mount-9p                           |                                |         |                |                     |                     |
	| update-context | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | update-context                                    |                                |         |                |                     |                     |
	|                | --alsologtostderr -v=2                            |                                |         |                |                     |                     |
	| update-context | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | update-context                                    |                                |         |                |                     |                     |
	|                | --alsologtostderr -v=2                            |                                |         |                |                     |                     |
	| update-context | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | update-context                                    |                                |         |                |                     |                     |
	|                | --alsologtostderr -v=2                            |                                |         |                |                     |                     |
	| image          | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | image ls --format short                           |                                |         |                |                     |                     |
	| image          | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | image ls --format yaml                            |                                |         |                |                     |                     |
	| image          | functional-20220531101620-2169 image build -t     | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | localhost/my-image:functional-20220531101620-2169 |                                |         |                |                     |                     |
	|                | testdata/build                                    |                                |         |                |                     |                     |
	| image          | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | image ls                                          |                                |         |                |                     |                     |
	| image          | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | image ls --format json                            |                                |         |                |                     |                     |
	| image          | functional-20220531101620-2169                    | functional-20220531101620-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:19 PDT | 31 May 22 10:19 PDT |
	|                | image ls --format table                           |                                |         |                |                     |                     |
	|----------------|---------------------------------------------------|--------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 10:18:58
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 10:18:58.380999    4021 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:18:58.381189    4021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:18:58.381194    4021 out.go:309] Setting ErrFile to fd 2...
	I0531 10:18:58.381198    4021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:18:58.381304    4021 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:18:58.381560    4021 out.go:303] Setting JSON to false
	I0531 10:18:58.397260    4021 start.go:115] hostinfo: {"hostname":"37309.local","uptime":1107,"bootTime":1654016431,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:18:58.397369    4021 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:18:58.419311    4021 out.go:177] * [functional-20220531101620-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:18:58.481943    4021 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 10:18:58.524377    4021 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:18:58.567058    4021 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:18:58.608902    4021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:18:58.667383    4021 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 10:18:58.705549    4021 config.go:178] Loaded profile config "functional-20220531101620-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:18:58.706065    4021 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 10:18:58.781359    4021 docker.go:137] docker version: linux-20.10.14
	I0531 10:18:58.781528    4021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:18:58.909290    4021 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:53 SystemTime:2022-05-31 17:18:58.846853666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:18:58.931133    4021 out.go:177] * Using the docker driver based on existing profile
	I0531 10:18:58.951882    4021 start.go:284] selected driver: docker
	I0531 10:18:58.951901    4021 start.go:806] validating driver "docker" against &{Name:functional-20220531101620-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531101620-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:18:58.952056    4021 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 10:18:58.952352    4021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:18:59.084033    4021 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 17:18:59.018331196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:18:59.086154    4021 cni.go:95] Creating CNI manager for ""
	I0531 10:18:59.086174    4021 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:18:59.086191    4021 start_flags.go:306] config:
	{Name:functional-20220531101620-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531101620-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:18:59.128378    4021 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 17:16:27 UTC, end at Tue 2022-05-31 17:24:00 UTC. --
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.025207501Z" level=info msg="ignoring event" container=20bc583707582181d11d651e1981ac85d84ab7a397e2f077d431730d515c3995 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.026370484Z" level=info msg="ignoring event" container=7d5151ec1af110b4dd77fed63ad1093d171d14438efbbe563bf6851284463cbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.100743094Z" level=info msg="ignoring event" container=3eb547129080937ca043f53f2197e32ec0bf292d3f1e1d59ae9b35df5e1f19d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.104849109Z" level=info msg="ignoring event" container=770a29499e6e930f269e4e5b39490740db45b29f17121066196f2599442683ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.105194150Z" level=info msg="ignoring event" container=320e10ab8c4be46d54dad8581f5f61b07912035816e7871908dda3a5db255816 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.105213835Z" level=info msg="ignoring event" container=c713634fb80ad479d4a08039068f12bfc427861d69e1782e243d8b6089d2a887 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.106521460Z" level=info msg="ignoring event" container=504c6c808b98e5acb4e7bf359af4dd2d3da0c84563f389c115103d69f3ad590d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.108194367Z" level=info msg="ignoring event" container=93e64357bd75a503a7406f6c1ce549e22983614a4e1274e60d9649b98982fca1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.111744908Z" level=info msg="ignoring event" container=387b2c9358c7c973af432768069ce8bf2c0dbc7c40cf4ac50185db00d99801cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:26 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:26.118828556Z" level=info msg="ignoring event" container=f638f573382526273262dc01cd18f1622b10e855a4fbe4456c3700a9984ec036 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:27 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:27.154155753Z" level=info msg="ignoring event" container=04b25d71eef08b29eb84126475ddda2257ecef0d4c639772db6c9b8a9168bce2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:27 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:27.305449237Z" level=info msg="ignoring event" container=0e85c7a19c03a722d17f105330d7e32c82b447916f3b6a4c7a9bc633648ec078 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:27 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:27.807281583Z" level=info msg="ignoring event" container=07744dd1cae08687fe0c54045e6d221b87d675cc0d34d18d15b0b8b627a9f6cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:31 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:31.027430261Z" level=info msg="ignoring event" container=330c786bb7a4c78e9822c40eb6132fae67ba355f7e94b97f8221ed19090eeed4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:33 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:33.303307819Z" level=info msg="ignoring event" container=2354e11645ae48f1d0342e0bb4449f811e9dd3b22c5176e47ee0ff29268db5e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:34 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:34.655952541Z" level=info msg="ignoring event" container=3a6697485f6cd12ae1f3d88ed0973abc1c4a9c8f83222b333386d4babc5fa3a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:17:34 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:17:34.689236532Z" level=info msg="ignoring event" container=8ee4e92e78f7c22bdb2ef16614789727071a6ca16d71162b58998b01a42af170 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:18:39 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:18:39.804879421Z" level=info msg="ignoring event" container=9b6c8d82970abbce37cb9b7582e4a0888a7318f9f1861fa8615b4858a8229db3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:18:39 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:18:39.850171702Z" level=info msg="ignoring event" container=d5119b61410079ec1376d65b8f364794843e14899b3040aaa5be1068dafef2f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:18:52 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:18:52.854193307Z" level=info msg="ignoring event" container=f48d4e6276f434f3bb1865f60073adf216aaf43039dfdd54b01b3726e311f935 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:18:54 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:18:54.505243468Z" level=info msg="ignoring event" container=31cd9540085474416ac503a643c8f86189a4b695ae1d057113aefa969dbe5245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:19:01 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:19:01.351903963Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 17:19:05 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:19:05.875039557Z" level=info msg="ignoring event" container=6e12ca963a2c4ead0aad71f506507059c9ce65d37f1fc36a620eae94f8c7abc2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:19:06 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:19:06.141789593Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	May 31 17:19:07 functional-20220531101620-2169 dockerd[380]: time="2022-05-31T17:19:07.580609519Z" level=warning msg="reference for unknown type: " digest="sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172" remote="docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172"
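
The only image activity dockerd records in this window is the by-digest pull of kubernetesui/dashboard (17:19:01) and kubernetesui/metrics-scraper (17:19:07), and the matching containers show as Running in the container-status section below, so image delivery and container startup both completed. To inspect the dashboard pods directly, something like the following should work (assuming the stock dashboard namespace and deployment name, which match the pod names listed below):

  kubectl --context functional-20220531101620-2169 -n kubernetes-dashboard get pods
  kubectl --context functional-20220531101620-2169 -n kubernetes-dashboard logs deploy/kubernetes-dashboard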
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID
	742faac9bf613       kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172   4 minutes ago       Running             dashboard-metrics-scraper   0                   f0ba401cf247b
	8c29942634080       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2         4 minutes ago       Running             kubernetes-dashboard        0                   bca8fd147722d
	f48d4e6276f43       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    5 minutes ago       Exited              mount-munger                0                   31cd954008547
	70d93cfda69d0       82e4c8a736a4f                                                                                          5 minutes ago       Running             echoserver                  0                   6cd3de63233ca
	0916afd71884d       nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514                          5 minutes ago       Running             myfrontend                  0                   836cab80183da
	a44cb264a1e96       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969          5 minutes ago       Running             echoserver                  0                   a9363f11e517d
	757aa4bde7d57       nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989                          5 minutes ago       Running             nginx                       0                   f0a88f4f4b30a
	bff4f8e18c4b1       mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5                          5 minutes ago       Running             mysql                       0                   59168ac68baca
	57d27ccac7169       a4ca41631cc7a                                                                                          6 minutes ago       Running             coredns                     1                   efb1aab377505
	6c3e7ff9a1280       6e38f40d628db                                                                                          6 minutes ago       Running             storage-provisioner         2                   f11098efc08f6
	d084f913157bb       8fa62c12256df                                                                                          6 minutes ago       Running             kube-apiserver              1                   e53ef2896cabc
	2354e11645ae4       8fa62c12256df                                                                                          6 minutes ago       Exited              kube-apiserver              0                   e53ef2896cabc
	d43fb46f78e40       595f327f224a4                                                                                          6 minutes ago       Running             kube-scheduler              1                   e4c557c19efdb
	5fd58fbf308c1       4c03754524064                                                                                          6 minutes ago       Running             kube-proxy                  1                   368830bb6761b
	615fce5181ef9       df7b72818ad2e                                                                                          6 minutes ago       Running             kube-controller-manager     1                   c3b0ce5d2331d
	82d68c3847fd6       25f8c7f3da61c                                                                                          6 minutes ago       Running             etcd                        1                   1a01c7bed4686
	07744dd1cae08       6e38f40d628db                                                                                          6 minutes ago       Exited              storage-provisioner         1                   f11098efc08f6
	330c786bb7a4c       a4ca41631cc7a                                                                                          7 minutes ago       Exited              coredns                     0                   3eb5471290809
	7d5151ec1af11       4c03754524064                                                                                          7 minutes ago       Exited              kube-proxy                  0                   f638f57338252
	ceff9f25d9dde       25f8c7f3da61c                                                                                          7 minutes ago       Exited              etcd                        0                   387b2c9358c7c
	320e10ab8c4be       df7b72818ad2e                                                                                          7 minutes ago       Exited              kube-controller-manager     0                   c713634fb80ad
	04b25d71eef08       595f327f224a4                                                                                          7 minutes ago       Exited              kube-scheduler              0                   770a29499e6e9
	
	* 
	* ==> coredns [330c786bb7a4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [57d27ccac716] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
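
The reload MD5 above (c23ed519c17e71ee396ed052e6209e94) differs from the one logged by the earlier coredns instance [330c786bb7a4] (db32ca3650231d74073ff4cf814959a7), so the Corefile was rewritten between the two runs. The active Corefile can be dumped from the standard ConfigMap (assuming the stock coredns ConfigMap name in kube-system):

  kubectl --context functional-20220531101620-2169 -n kube-system get configmap coredns -o yaml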
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220531101620-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220531101620-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=functional-20220531101620-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T10_16_42_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:16:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220531101620-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 17:24:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 17:19:35 +0000   Tue, 31 May 2022 17:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 17:19:35 +0000   Tue, 31 May 2022 17:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 17:19:35 +0000   Tue, 31 May 2022 17:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 17:19:35 +0000   Tue, 31 May 2022 17:17:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220531101620-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                c2e3ae0e-51a5-4755-9b65-1774ecc314cd
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-5vcng                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     hello-node-connect-74cf8bc446-7kxtc                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  default                     mysql-b87c45988-7b49r                                     600m (10%)    700m (11%)  512Mi (8%)       700Mi (11%)    5m58s
	  default                     nginx-svc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  default                     sp-pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 coredns-64897985d-sflwf                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     7m6s
	  kube-system                 etcd-functional-20220531101620-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         7m19s
	  kube-system                 kube-apiserver-functional-20220531101620-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-functional-20220531101620-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-proxy-9pnrt                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 kube-scheduler-functional-20220531101620-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kubernetes-dashboard        dashboard-metrics-scraper-58549894f-74hrc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-kb5nt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (22%)  700m (11%)
	  memory             682Mi (11%)  870Mi (14%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m30s                  kube-proxy  
	  Normal  Starting                 7m5s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  7m25s (x5 over 7m25s)  kubelet     Node functional-20220531101620-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m25s (x5 over 7m25s)  kubelet     Node functional-20220531101620-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m25s (x4 over 7m25s)  kubelet     Node functional-20220531101620-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 7m25s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m19s                  kubelet     Node functional-20220531101620-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s                  kubelet     Node functional-20220531101620-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s                  kubelet     Node functional-20220531101620-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 7m19s                  kubelet     Starting kubelet.
	  Normal  NodeReady                7m19s                  kubelet     Node functional-20220531101620-2169 status is now: NodeReady
	  Normal  Starting                 6m29s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s                  kubelet     Node functional-20220531101620-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s                  kubelet     Node functional-20220531101620-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s                  kubelet     Node functional-20220531101620-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             6m29s                  kubelet     Node functional-20220531101620-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m29s                  kubelet     Node functional-20220531101620-2169 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001413] FS-Cache: O-key=[8] '751ad70200000000'
	[  +0.001093] FS-Cache: N-cookie c=000000004f5de6c9 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001737] FS-Cache: N-cookie d=0000000038acf5de n=000000008809a18b
	[  +0.001435] FS-Cache: N-key=[8] '751ad70200000000'
	[  +0.001928] FS-Cache: Duplicate cookie detected
	[  +0.001010] FS-Cache: O-cookie c=000000002a5eed4b [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001783] FS-Cache: O-cookie d=0000000038acf5de n=000000006a3a9612
	[  +0.001418] FS-Cache: O-key=[8] '751ad70200000000'
	[  +0.001104] FS-Cache: N-cookie c=000000004f5de6c9 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001740] FS-Cache: N-cookie d=0000000038acf5de n=000000002ffefb64
	[  +0.001430] FS-Cache: N-key=[8] '751ad70200000000'
	[  +3.329767] FS-Cache: Duplicate cookie detected
	[  +0.001037] FS-Cache: O-cookie c=00000000b56bf5b4 [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001856] FS-Cache: O-cookie d=0000000038acf5de n=00000000b91e189d
	[  +0.001481] FS-Cache: O-key=[8] '741ad70200000000'
	[  +0.001123] FS-Cache: N-cookie c=000000002d550120 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001784] FS-Cache: N-cookie d=0000000038acf5de n=00000000eccdb4bc
	[  +0.001461] FS-Cache: N-key=[8] '741ad70200000000'
	[  +0.431860] FS-Cache: Duplicate cookie detected
	[  +0.001026] FS-Cache: O-cookie c=000000004a859abe [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001835] FS-Cache: O-cookie d=0000000038acf5de n=00000000e6b4c68e
	[  +0.001495] FS-Cache: O-key=[8] '811ad70200000000'
	[  +0.001101] FS-Cache: N-cookie c=000000002d550120 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001734] FS-Cache: N-cookie d=0000000038acf5de n=00000000648703d1
	[  +0.001443] FS-Cache: N-key=[8] '811ad70200000000'
	
	* 
	* ==> etcd [82d68c3847fd] <==
	* {"level":"info","ts":"2022-05-31T17:17:27.835Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-05-31T17:17:27.836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-05-31T17:17:27.836Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-05-31T17:17:27.836Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:17:27.836Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:17:27.838Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T17:17:27.839Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:17:27.839Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:17:27.840Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T17:17:27.840Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T17:17:29.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-05-31T17:17:29.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:17:29.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:17:29.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-05-31T17:17:29.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-05-31T17:17:29.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-05-31T17:17:29.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-05-31T17:17:29.531Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220531101620-2169 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:17:29.531Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:17:29.531Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:17:29.532Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:17:29.532Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:17:29.532Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-31T17:17:29.533Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:18:13.246Z","caller":"traceutil/trace.go:171","msg":"trace[1995531147] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"121.268751ms","start":"2022-05-31T17:18:13.125Z","end":"2022-05-31T17:18:13.246Z","steps":["trace[1995531147] 'process raft request'  (duration: 118.092894ms)"],"step_count":1}
	
	* 
	* ==> etcd [ceff9f25d9dd] <==
	* {"level":"info","ts":"2022-05-31T17:16:37.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220531101620-2169 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:16:37.965Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:16:37.966Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:16:37.966Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-31T17:16:37.966Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:16:37.966Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:16:37.966Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:16:37.967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:16:37.967Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:17:25.916Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-05-31T17:17:25.916Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220531101620-2169","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/05/31 17:17:25 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/05/31 17:17:25 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-05-31T17:17:25.926Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-05-31T17:17:25.927Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:17:25.929Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:17:25.929Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220531101620-2169","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  17:24:01 up 11 min,  0 users,  load average: 0.40, 0.52, 0.43
	Linux functional-20220531101620-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2354e11645ae] <==
	* I0531 17:17:33.284201       1 server.go:565] external host was not specified, using 192.168.49.2
	I0531 17:17:33.284650       1 server.go:172] Version: v1.23.6
	E0531 17:17:33.284874       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
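
This instance failed at 17:17:33 because port 8441 was still held by another process; its replacement [d084f913157b] (next section) was serving and had synced its caches by 17:17:37, and the container-status section above shows exactly one Exited and one Running kube-apiserver for the same pod, so this looks like a restart race rather than a standing failure. Which process owns the listener can be confirmed from inside the node (assuming ss from iproute2 is present in the kicbase image):

  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh -- sudo ss -ltnp | grep 8441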
	
	* 
	* ==> kube-apiserver [d084f913157b] <==
	* E0531 17:17:37.202460       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0531 17:17:37.210905       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:17:37.211014       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:17:37.211736       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:17:37.213356       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:17:37.228529       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:17:38.110331       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:17:38.113572       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:17:38.115772       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:17:42.757208       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:17:43.250899       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:17:43.426235       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:18:03.039607       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.101.244.71]
	I0531 17:18:03.044138       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:18:03.078963       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:18:03.102958       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 17:18:21.174440       1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.103.42.251]
	I0531 17:18:30.721229       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.97.52.129]
	I0531 17:18:41.837553       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.98.59.150]
	I0531 17:19:00.125198       1 controller.go:611] quota admission added evaluator for: namespaces
	I0531 17:19:00.139798       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:19:00.166752       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:19:00.174864       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:19:00.333486       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.230.21]
	I0531 17:19:00.345006       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.101.187.168]
	
	* 
	* ==> kube-controller-manager [320e10ab8c4b] <==
	* I0531 17:16:54.584895       1 shared_informer.go:247] Caches are synced for stateful set 
	I0531 17:16:54.585021       1 shared_informer.go:247] Caches are synced for namespace 
	I0531 17:16:54.585981       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0531 17:16:54.587216       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0531 17:16:54.587471       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0531 17:16:54.588985       1 shared_informer.go:247] Caches are synced for service account 
	I0531 17:16:54.589283       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:16:54.591282       1 shared_informer.go:247] Caches are synced for expand 
	I0531 17:16:54.597142       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 17:16:54.734254       1 shared_informer.go:247] Caches are synced for endpoint 
	I0531 17:16:54.735733       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0531 17:16:54.735822       1 shared_informer.go:247] Caches are synced for cronjob 
	I0531 17:16:54.735868       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 17:16:54.736150       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0531 17:16:54.745817       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:16:54.787161       1 shared_informer.go:247] Caches are synced for job 
	I0531 17:16:54.791486       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:16:55.211930       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:16:55.242663       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9pnrt"
	I0531 17:16:55.248976       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:16:55.249006       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:16:55.490885       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-xsc8p"
	I0531 17:16:55.495130       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-sflwf"
	I0531 17:16:55.942372       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:16:55.945491       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-xsc8p"
	
	* 
	* ==> kube-controller-manager [615fce5181ef] <==
	* I0531 17:17:43.924715       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:17:43.924853       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:18:03.081178       1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
	I0531 17:18:03.095911       1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-7b49r"
	I0531 17:18:27.430805       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0531 17:18:30.665603       1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
	I0531 17:18:30.668606       1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-7kxtc"
	I0531 17:18:41.789868       1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
	I0531 17:18:41.792916       1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-5vcng"
	I0531 17:19:00.156261       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-58549894f to 1"
	I0531 17:19:00.163218       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 17:19:00.164581       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0531 17:19:00.169166       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 17:19:00.169875       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 17:19:00.172334       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 17:19:00.172497       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 17:19:00.173663       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 17:19:00.178696       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 17:19:00.178833       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 17:19:00.181263       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 17:19:00.181326       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 17:19:00.183397       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 17:19:00.183457       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 17:19:00.281096       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-58549894f-74hrc"
	I0531 17:19:00.281476       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-kb5nt"
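Note: the FailedCreate/forbidden errors at 17:19:00 are a create-ordering race: the dashboard ReplicaSets were reconciled before the kubernetes-dashboard ServiceAccount existed, and the SuccessfulCreate events a fraction of a second later show the retries succeeding. A post-mortem spot check (sketch, assuming the cluster is still up):

    kubectl --context functional-20220531101620-2169 -n kubernetes-dashboard \
      get serviceaccount kubernetes-dashboard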
	
	* 
	* ==> kube-proxy [5fd58fbf308c] <==
	* E0531 17:17:27.838857       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220531101620-2169": dial tcp 192.168.49.2:8441: connect: connection refused
	I0531 17:17:30.918862       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 17:17:30.918949       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 17:17:30.918980       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:17:31.006615       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:17:31.006667       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:17:31.006692       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:17:31.006945       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:17:31.007317       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:17:31.009549       1 config.go:317] "Starting service config controller"
	I0531 17:17:31.009839       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:17:31.009986       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:17:31.010026       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:17:31.110266       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 17:17:31.110350       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [7d5151ec1af1] <==
	* I0531 17:16:55.796081       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 17:16:55.796186       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 17:16:55.796236       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:16:55.812358       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:16:55.812435       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:16:55.812451       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:16:55.812467       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:16:55.812851       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:16:55.813764       1 config.go:317] "Starting service config controller"
	I0531 17:16:55.813871       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:16:55.814472       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:16:55.814547       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:16:55.914041       1 shared_informer.go:247] Caches are synced for service config 
	I0531 17:16:55.915208       1 shared_informer.go:247] Caches are synced for endpoint slice config 
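Note: both kube-proxy instances log an empty proxyMode and fall back to iptables. On this kubeadm-provisioned cluster the mode comes from the kube-proxy ConfigMap; a sketch of how to inspect it (standard kubeadm layout assumed):

    kubectl --context functional-20220531101620-2169 -n kube-system \
      get configmap kube-proxy -o yaml | grep 'mode:'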
	
	* 
	* ==> kube-scheduler [04b25d71eef0] <==
	* I0531 17:16:39.663341       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0531 17:16:39.663609       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:16:39.663639       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:16:39.663871       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:16:39.663901       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 17:16:39.663943       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:16:39.663997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:16:39.664332       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:16:39.664365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:16:39.664464       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:16:39.664511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:16:39.666807       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:16:39.666843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:16:39.666933       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:16:39.667048       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:16:39.667181       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:16:39.667210       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:16:40.630227       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:16:40.630263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:16:40.721693       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:16:40.721731       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0531 17:16:41.263649       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0531 17:17:25.932714       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0531 17:17:25.932878       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0531 17:17:25.933302       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
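Note: the forbidden list/watch errors at 17:16:39-40 are startup noise: the scheduler's informers race the apiserver's RBAC bootstrap, and the "Caches are synced" line at 17:16:41 shows they recovered. To confirm the bootstrap grant the scheduler eventually used (sketch):

    kubectl --context functional-20220531101620-2169 \
      get clusterrolebinding system:kube-scheduler -o wide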
	
	* 
	* ==> kube-scheduler [d43fb46f78e4] <==
	* I0531 17:17:28.455810       1 serving.go:348] Generated self-signed cert in-memory
	W0531 17:17:30.913947       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 17:17:30.913979       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:17:30.913986       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 17:17:30.913990       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 17:17:30.924833       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0531 17:17:30.925801       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0531 17:17:30.925901       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 17:17:30.925908       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 17:17:30.925918       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0531 17:17:30.930907       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 17:17:30.930927       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 17:17:30.932398       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 17:17:30.932446       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 17:17:30.933542       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 17:17:30.933822       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0531 17:17:31.026858       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0531 17:17:37.204542       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0531 17:17:37.205472       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0531 17:17:37.206630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0531 17:17:37.207292       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0531 17:17:37.207338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
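Note: the 17:17:30 warnings print their own remedy. Filled in, the suggested rolebinding would look like the sketch below; the names are placeholders, and this run did not actually need it, since the scheduler continued without authentication configuration:

    kubectl -n kube-system create rolebinding auth-reader-binding \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=kube-system:my-component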
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:16:27 UTC, end at Tue 2022-05-31 17:24:02 UTC. --
	May 31 17:18:50 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:50.828008    5510 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcwqn\" (UniqueName: \"kubernetes.io/projected/f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6-kube-api-access-fcwqn\") pod \"busybox-mount\" (UID: \"f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6\") " pod="default/busybox-mount"
	May 31 17:18:51 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:51.224840    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/busybox-mount through plugin: invalid network status for"
	May 31 17:18:51 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:51.448017    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/busybox-mount through plugin: invalid network status for"
	May 31 17:18:53 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:53.468012    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/busybox-mount through plugin: invalid network status for"
	May 31 17:18:54 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:54.672293    5510 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6-test-volume\") pod \"f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6\" (UID: \"f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6\") "
	May 31 17:18:54 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:54.672652    5510 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcwqn\" (UniqueName: \"kubernetes.io/projected/f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6-kube-api-access-fcwqn\") pod \"f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6\" (UID: \"f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6\") "
	May 31 17:18:54 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:54.672406    5510 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6-test-volume" (OuterVolumeSpecName: "test-volume") pod "f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6" (UID: "f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 31 17:18:54 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:54.674526    5510 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6-kube-api-access-fcwqn" (OuterVolumeSpecName: "kube-api-access-fcwqn") pod "f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6" (UID: "f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6"). InnerVolumeSpecName "kube-api-access-fcwqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 31 17:18:54 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:54.773656    5510 reconciler.go:300] "Volume detached for volume \"kube-api-access-fcwqn\" (UniqueName: \"kubernetes.io/projected/f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6-kube-api-access-fcwqn\") on node \"functional-20220531101620-2169\" DevicePath \"\""
	May 31 17:18:54 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:54.773687    5510 reconciler.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6-test-volume\") on node \"functional-20220531101620-2169\" DevicePath \"\""
	May 31 17:18:55 functional-20220531101620-2169 kubelet[5510]: I0531 17:18:55.489291    5510 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="31cd9540085474416ac503a643c8f86189a4b695ae1d057113aefa969dbe5245"
	May 31 17:19:00 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:00.284113    5510 topology_manager.go:200] "Topology Admit Handler"
	May 31 17:19:00 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:00.288643    5510 topology_manager.go:200] "Topology Admit Handler"
	May 31 17:19:00 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:00.419434    5510 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/89d953cf-f669-40f6-9a64-f158e14e2630-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-kb5nt\" (UID: \"89d953cf-f669-40f6-9a64-f158e14e2630\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-kb5nt"
	May 31 17:19:00 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:00.419524    5510 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/669f3594-e184-463b-9451-9c272695f4d9-tmp-volume\") pod \"dashboard-metrics-scraper-58549894f-74hrc\" (UID: \"669f3594-e184-463b-9451-9c272695f4d9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-74hrc"
	May 31 17:19:00 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:00.419558    5510 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slvmw\" (UniqueName: \"kubernetes.io/projected/669f3594-e184-463b-9451-9c272695f4d9-kube-api-access-slvmw\") pod \"dashboard-metrics-scraper-58549894f-74hrc\" (UID: \"669f3594-e184-463b-9451-9c272695f4d9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-74hrc"
	May 31 17:19:00 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:00.419576    5510 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29fk\" (UniqueName: \"kubernetes.io/projected/89d953cf-f669-40f6-9a64-f158e14e2630-kube-api-access-d29fk\") pod \"kubernetes-dashboard-8469778f77-kb5nt\" (UID: \"89d953cf-f669-40f6-9a64-f158e14e2630\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-kb5nt"
	May 31 17:19:01 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:01.065119    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-kb5nt through plugin: invalid network status for"
	May 31 17:19:01 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:01.107402    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-74hrc through plugin: invalid network status for"
	May 31 17:19:01 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:01.537068    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-kb5nt through plugin: invalid network status for"
	May 31 17:19:01 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:01.539095    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-74hrc through plugin: invalid network status for"
	May 31 17:19:06 functional-20220531101620-2169 kubelet[5510]: W0531 17:19:06.039875    5510 container.go:489] Failed to get RecentStats("/system.slice/docker-6e12ca963a2c4ead0aad71f506507059c9ce65d37f1fc36a620eae94f8c7abc2.scope") while determining the next housekeeping: unable to find data in memory cache
	May 31 17:19:07 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:07.586369    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-kb5nt through plugin: invalid network status for"
	May 31 17:19:09 functional-20220531101620-2169 kubelet[5510]: I0531 17:19:09.610010    5510 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-74hrc through plugin: invalid network status for"
	May 31 17:22:32 functional-20220531101620-2169 kubelet[5510]: W0531 17:22:32.422550    5510 sysinfo.go:203] Nodes topology is not available, providing CPU topology
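Note: the repeated "invalid network status" messages are transient dockershim noise while pod sandboxes acquire IPs; the dashboard pods named above did come up (see the kubernetes-dashboard log below). A spot check of the assigned pod IPs (sketch):

    kubectl --context functional-20220531101620-2169 -n kubernetes-dashboard get pods -o wide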
	
	* 
	* ==> kubernetes-dashboard [8c2994263408] <==
	* 2022/05/31 17:19:07 Using namespace: kubernetes-dashboard
	2022/05/31 17:19:07 Using in-cluster config to connect to apiserver
	2022/05/31 17:19:07 Using secret token for csrf signing
	2022/05/31 17:19:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 17:19:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 17:19:07 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 17:19:07 Generating JWE encryption key
	2022/05/31 17:19:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 17:19:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 17:19:07 Initializing JWE encryption key from synchronized object
	2022/05/31 17:19:07 Creating in-cluster Sidecar client
	2022/05/31 17:19:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 17:19:07 Serving insecurely on HTTP port: 9090
	2022/05/31 17:19:07 Starting overwatch
	2022/05/31 17:19:37 Successful request to sidecar
	
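Note: the metric client health check at 17:19:07 failed because the dashboard started before the dashboard-metrics-scraper Service (created at 17:19:00 per the apiserver log) had ready endpoints; the 17:19:37 retry succeeded. To verify the sidecar's endpoints (sketch):

    kubectl --context functional-20220531101620-2169 -n kubernetes-dashboard \
      get endpoints dashboard-metrics-scraper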
	* 
	* ==> storage-provisioner [07744dd1cae0] <==
	* I0531 17:17:27.707677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0531 17:17:27.712214       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [6c3e7ff9a128] <==
	* I0531 17:17:38.835069       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 17:17:38.842890       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 17:17:38.842937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 17:17:56.243830       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 17:17:56.244032       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220531101620-2169_db9c2232-e0ee-40e5-99e1-1632d4271838!
	I0531 17:17:56.243963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ad95a5b-f3bf-42fd-ab10-1379ce2cd403", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220531101620-2169_db9c2232-e0ee-40e5-99e1-1632d4271838 became leader
	I0531 17:17:56.344302       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220531101620-2169_db9c2232-e0ee-40e5-99e1-1632d4271838!
	I0531 17:18:27.430792       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0531 17:18:27.430844       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d92ed678-0d33-4ecb-b444-3131c5597a00 456 0 2022-05-31 17:16:57 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-05-31 17:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-fddc2ee0-61b4-480f-ae1e-6f4c7e7715b8 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  fddc2ee0-61b4-480f-ae1e-6f4c7e7715b8 657 0 2022-05-31 17:18:27 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-05-31 17:18:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2022-05-31 17:18:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0531 17:18:27.431116       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-fddc2ee0-61b4-480f-ae1e-6f4c7e7715b8" provisioned
	I0531 17:18:27.431149       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0531 17:18:27.431153       1 volume_store.go:212] Trying to save persistentvolume "pvc-fddc2ee0-61b4-480f-ae1e-6f4c7e7715b8"
	I0531 17:18:27.431680       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fddc2ee0-61b4-480f-ae1e-6f4c7e7715b8", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0531 17:18:27.435546       1 volume_store.go:219] persistentvolume "pvc-fddc2ee0-61b4-480f-ae1e-6f4c7e7715b8" saved
	I0531 17:18:27.435647       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fddc2ee0-61b4-480f-ae1e-6f4c7e7715b8", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-fddc2ee0-61b4-480f-ae1e-6f4c7e7715b8
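Note: the provisioning dump above corresponds to a claim like the following sketch, reconstructed from the logged fields (name myclaim, 500Mi request, ReadWriteOnce, default "standard" class):

    kubectl --context functional-20220531101620-2169 apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF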
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220531101620-2169 -n functional-20220531101620-2169
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220531101620-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-mount
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220531101620-2169 describe pod busybox-mount
helpers_test.go:280: (dbg) kubectl --context functional-20220531101620-2169 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:         busybox-mount
	Namespace:    default
	Priority:     0
	Node:         functional-20220531101620-2169/192.168.49.2
	Start Time:   Tue, 31 May 2022 10:18:50 -0700
	Labels:       integration-test=busybox-mount
	Annotations:  <none>
	Status:       Succeeded
	IP:           172.17.0.8
	IPs:
	  IP:  172.17.0.8
	Containers:
	  mount-munger:
	    Container ID:  docker://f48d4e6276f434f3bb1865f60073adf216aaf43039dfdd54b01b3726e311f935
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 31 May 2022 10:18:52 -0700
	      Finished:     Tue, 31 May 2022 10:18:52 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fcwqn (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fcwqn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m12s  default-scheduler  Successfully assigned default/busybox-mount to functional-20220531101620-2169
	  Normal  Pulling    5m12s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.462960252s
	  Normal  Created    5m11s  kubelet            Created container mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:283: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (304.55s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (254.12s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220531102407-2169 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0531 10:24:08.497999    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:24:36.190340    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:28:03.103529    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:03.110012    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:03.122219    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:03.144436    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:03.185497    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:03.265986    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:03.427299    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:03.749561    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:04.391014    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:05.671460    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:08.231784    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:13.354057    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
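Note: the cert_rotation errors do not come from this test: the shared client keeps trying to reload client certificates for profiles (addons-20220531101240-2169, functional-20220531101620-2169) that earlier tests already tore down, so the key paths no longer exist. A quick check of which profile directories remain (sketch):

    ls /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles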
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220531102407-2169 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.094552405s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-20220531102407-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node ingress-addon-legacy-20220531102407-2169 in cluster ingress-addon-legacy-20220531102407-2169
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
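Note: the repeated "Generating certificates and keys ... / Booting up control plane ..." pairs in the stdout above indicate kubeadm init failed once and was retried before minikube gave up with exit status 109. A post-mortem sketch for pulling the full guest logs for this profile:

    out/minikube-darwin-amd64 logs -p ingress-addon-legacy-20220531102407-2169 --file=./ingress-legacy-logs.txt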
** stderr ** 
	I0531 10:24:07.469834    4335 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:24:07.470042    4335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:24:07.470048    4335 out.go:309] Setting ErrFile to fd 2...
	I0531 10:24:07.470052    4335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:24:07.470162    4335 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:24:07.470490    4335 out.go:303] Setting JSON to false
	I0531 10:24:07.485358    4335 start.go:115] hostinfo: {"hostname":"37309.local","uptime":1416,"bootTime":1654016431,"procs":347,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:24:07.485470    4335 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:24:07.507611    4335 out.go:177] * [ingress-addon-legacy-20220531102407-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:24:07.529377    4335 notify.go:193] Checking for updates...
	I0531 10:24:07.551365    4335 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 10:24:07.573576    4335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:24:07.595445    4335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:24:07.617634    4335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:24:07.639566    4335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 10:24:07.661430    4335 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 10:24:07.731956    4335 docker.go:137] docker version: linux-20.10.14
	I0531 10:24:07.732111    4335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:24:07.856681    4335 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-05-31 17:24:07.802523244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:24:07.878752    4335 out.go:177] * Using the docker driver based on user configuration
	I0531 10:24:07.900559    4335 start.go:284] selected driver: docker
	I0531 10:24:07.900585    4335 start.go:806] validating driver "docker" against <nil>
	I0531 10:24:07.900616    4335 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 10:24:07.904061    4335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:24:08.029086    4335 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-05-31 17:24:07.974632381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:24:08.029357    4335 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 10:24:08.029518    4335 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 10:24:08.051107    4335 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 10:24:08.073249    4335 cni.go:95] Creating CNI manager for ""
	I0531 10:24:08.073280    4335 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:24:08.073304    4335 start_flags.go:306] config:
	{Name:ingress-addon-legacy-20220531102407-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220531102407-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:24:08.095207    4335 out.go:177] * Starting control plane node ingress-addon-legacy-20220531102407-2169 in cluster ingress-addon-legacy-20220531102407-2169
	I0531 10:24:08.138241    4335 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 10:24:08.160200    4335 out.go:177] * Pulling base image ...
	I0531 10:24:08.203317    4335 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0531 10:24:08.203335    4335 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 10:24:08.267266    4335 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 10:24:08.267305    4335 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 10:24:08.274498    4335 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0531 10:24:08.274511    4335 cache.go:57] Caching tarball of preloaded images
	I0531 10:24:08.274795    4335 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0531 10:24:08.318816    4335 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0531 10:24:08.340014    4335 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0531 10:24:08.440344    4335 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0531 10:24:12.947088    4335 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0531 10:24:12.947300    4335 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0531 10:24:13.556676    4335 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
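The download line above carries the expected digest in its ?checksum=md5:... query parameter. A minimal Go sketch of that download-and-verify step follows; downloadWithMD5 is a hypothetical helper for this sketch (minikube's real logic lives in its own download and preload packages).

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest while hashing, then compares the
// MD5 of the bytes written against wantHex.
func downloadWithMD5(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to the file and the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// URL and expected digest taken from the download line above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4"
	if err := downloadWithMD5(url, "preloaded.tar.lz4", "ff35f06d4f6c0bac9297b8f85d8ebf70"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}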
	I0531 10:24:13.556964    4335 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/config.json ...
	I0531 10:24:13.557026    4335 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/config.json: {Name:mk2399cf1a9e27eb6f3915abcbf6dc99b201b830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:24:13.557385    4335 cache.go:206] Successfully downloaded all kic artifacts
	I0531 10:24:13.557449    4335 start.go:352] acquiring machines lock for ingress-addon-legacy-20220531102407-2169: {Name:mk8436989d616ae90b2f919454fe3f2891479105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:24:13.557578    4335 start.go:356] acquired machines lock for "ingress-addon-legacy-20220531102407-2169" in 102.414µs
	I0531 10:24:13.557620    4335 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220531102407-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220531102407-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 10:24:13.557727    4335 start.go:131] createHost starting for "" (driver="docker")
	I0531 10:24:13.580215    4335 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0531 10:24:13.580627    4335 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220531102407-2169" (driver="docker")
	I0531 10:24:13.580674    4335 client.go:168] LocalClient.Create starting
	I0531 10:24:13.580829    4335 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 10:24:13.580899    4335 main.go:134] libmachine: Decoding PEM data...
	I0531 10:24:13.580931    4335 main.go:134] libmachine: Parsing certificate...
	I0531 10:24:13.581032    4335 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 10:24:13.581110    4335 main.go:134] libmachine: Decoding PEM data...
	I0531 10:24:13.581131    4335 main.go:134] libmachine: Parsing certificate...
	I0531 10:24:13.581948    4335 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220531102407-2169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 10:24:13.647897    4335 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220531102407-2169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 10:24:13.647998    4335 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220531102407-2169] to gather additional debugging logs...
	I0531 10:24:13.648018    4335 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220531102407-2169
	W0531 10:24:13.709143    4335 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220531102407-2169 returned with exit code 1
	I0531 10:24:13.709170    4335 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220531102407-2169]: docker network inspect ingress-addon-legacy-20220531102407-2169: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220531102407-2169
	I0531 10:24:13.709187    4335 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220531102407-2169]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220531102407-2169
	
	** /stderr **
	I0531 10:24:13.709255    4335 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 10:24:13.770817    4335 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000ec80] misses:0}
	I0531 10:24:13.770854    4335 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 10:24:13.770870    4335 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220531102407-2169 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 10:24:13.770931    4335 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220531102407-2169
	I0531 10:24:13.901128    4335 network_create.go:99] docker network ingress-addon-legacy-20220531102407-2169 192.168.49.0/24 created
	I0531 10:24:13.901163    4335 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220531102407-2169" container
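The subnet record logged above (Gateway/ClientMin/ClientMax/Broadcast) follows directly from the CIDR. A small Go sketch of that arithmetic for IPv4 subnets; it reproduces the logged values for 192.168.49.0/24 but is an illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Assumes an IPv4 subnet; the gateway is the first usable host,
	// the broadcast is the last address, and the client range sits
	// between them, mirroring the fields logged above.
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	mask := ipnet.Mask

	gateway := make(net.IP, 4)
	broadcast := make(net.IP, 4)
	for i := 0; i < 4; i++ {
		gateway[i] = base[i]
		broadcast[i] = base[i] | ^mask[i]
	}
	gateway[3]++ // first usable host: .1

	clientMin := make(net.IP, 4)
	copy(clientMin, gateway)
	clientMin[3]++ // first address handed to containers: .2

	clientMax := make(net.IP, 4)
	copy(clientMax, broadcast)
	clientMax[3]-- // last usable host: .254

	// Prints: Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255
	fmt.Printf("Gateway:%s ClientMin:%s ClientMax:%s Broadcast:%s\n",
		gateway, clientMin, clientMax, broadcast)
}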
	I0531 10:24:13.901255    4335 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 10:24:13.965949    4335 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220531102407-2169 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220531102407-2169 --label created_by.minikube.sigs.k8s.io=true
	I0531 10:24:14.027901    4335 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220531102407-2169
	I0531 10:24:14.028048    4335 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220531102407-2169-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220531102407-2169 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220531102407-2169:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 10:24:14.492393    4335 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220531102407-2169
	I0531 10:24:14.492461    4335 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0531 10:24:14.492476    4335 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 10:24:14.492559    4335 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220531102407-2169:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 10:24:18.734788    4335 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220531102407-2169:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (4.242180276s)
	I0531 10:24:18.734816    4335 kic.go:188] duration metric: took 4.242392 seconds to extract preloaded images to volume
	I0531 10:24:18.734917    4335 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 10:24:18.858835    4335 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220531102407-2169 --name ingress-addon-legacy-20220531102407-2169 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220531102407-2169 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220531102407-2169 --network ingress-addon-legacy-20220531102407-2169 --ip 192.168.49.2 --volume ingress-addon-legacy-20220531102407-2169:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 10:24:19.226742    4335 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220531102407-2169 --format={{.State.Running}}
	I0531 10:24:19.294374    4335 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220531102407-2169 --format={{.State.Status}}
	I0531 10:24:19.362802    4335 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220531102407-2169 stat /var/lib/dpkg/alternatives/iptables
	I0531 10:24:19.477648    4335 oci.go:247] the created container "ingress-addon-legacy-20220531102407-2169" has a running status.
	I0531 10:24:19.477680    4335 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa...
	I0531 10:24:19.550267    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0531 10:24:19.550321    4335 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 10:24:19.663311    4335 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220531102407-2169 --format={{.State.Status}}
	I0531 10:24:19.729329    4335 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 10:24:19.729346    4335 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220531102407-2169 chown docker:docker /home/docker/.ssh/authorized_keys]
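The three steps above generate an RSA key for the kic container, copy the 381-byte public half into /home/docker/.ssh/authorized_keys, and fix its ownership. A Go sketch of the key-generation side, using golang.org/x/crypto/ssh for the authorized_keys encoding; file names mirror the log, but the real code lives in minikube's kic package.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the keypair (2048 bits assumed for the sketch).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half, PEM-encoded as id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// Public half in authorized_keys format ("ssh-rsa AAAA..."); for a
	// 2048-bit key this line is 381 bytes, matching the transfer above.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}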
	I0531 10:24:19.851863    4335 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220531102407-2169 --format={{.State.Status}}
	I0531 10:24:19.917625    4335 machine.go:88] provisioning docker machine ...
	I0531 10:24:19.917666    4335 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220531102407-2169"
	I0531 10:24:19.917749    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:19.984209    4335 main.go:134] libmachine: Using SSH client type: native
	I0531 10:24:19.984384    4335 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0531 10:24:19.984401    4335 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220531102407-2169 && echo "ingress-addon-legacy-20220531102407-2169" | sudo tee /etc/hostname
	I0531 10:24:20.106754    4335 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220531102407-2169
	
	I0531 10:24:20.106838    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:20.172763    4335 main.go:134] libmachine: Using SSH client type: native
	I0531 10:24:20.172923    4335 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0531 10:24:20.172939    4335 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220531102407-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220531102407-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220531102407-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 10:24:20.286046    4335 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 10:24:20.286063    4335 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 10:24:20.286092    4335 ubuntu.go:177] setting up certificates
	I0531 10:24:20.286099    4335 provision.go:83] configureAuth start
	I0531 10:24:20.286156    4335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:20.352322    4335 provision.go:138] copyHostCerts
	I0531 10:24:20.352355    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 10:24:20.352405    4335 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 10:24:20.352416    4335 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 10:24:20.352514    4335 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 10:24:20.352664    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 10:24:20.352702    4335 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 10:24:20.352710    4335 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 10:24:20.352768    4335 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 10:24:20.352876    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 10:24:20.352901    4335 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 10:24:20.352905    4335 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 10:24:20.352985    4335 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 10:24:20.353116    4335 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220531102407-2169 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220531102407-2169]
	I0531 10:24:20.567490    4335 provision.go:172] copyRemoteCerts
	I0531 10:24:20.567542    4335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 10:24:20.567584    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:20.633793    4335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa Username:docker}
	I0531 10:24:20.717777    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 10:24:20.717846    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 10:24:20.734261    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 10:24:20.734324    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0531 10:24:20.750182    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 10:24:20.750253    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 10:24:20.766331    4335 provision.go:86] duration metric: configureAuth took 480.221706ms
	I0531 10:24:20.766343    4335 ubuntu.go:193] setting minikube options for container-runtime
	I0531 10:24:20.766483    4335 config.go:178] Loaded profile config "ingress-addon-legacy-20220531102407-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0531 10:24:20.766544    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:20.833239    4335 main.go:134] libmachine: Using SSH client type: native
	I0531 10:24:20.833377    4335 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0531 10:24:20.833394    4335 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 10:24:20.950880    4335 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 10:24:20.950896    4335 ubuntu.go:71] root file system type: overlay
	I0531 10:24:20.951040    4335 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 10:24:20.951113    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:21.017361    4335 main.go:134] libmachine: Using SSH client type: native
	I0531 10:24:21.017518    4335 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0531 10:24:21.017568    4335 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 10:24:21.137652    4335 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 10:24:21.137727    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:21.204296    4335 main.go:134] libmachine: Using SSH client type: native
	I0531 10:24:21.204452    4335 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0531 10:24:21.204466    4335 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 10:24:21.760952    4335 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:24:21.136957893 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0531 10:24:21.760971    4335 machine.go:91] provisioned docker machine in 1.843351651s
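The diff output above comes from the update command's guard: the new unit file is only moved into place, and docker only reloaded and restarted, when the rendered content differs from what is on disk. A Go sketch of that write-if-changed pattern; the paths and restart commands here are illustrative, since minikube runs the equivalent shell over SSH.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit swaps in the rendered unit and restarts docker only when
// the on-disk content actually differs, mirroring the
// `diff ... || { mv ...; systemctl ...; }` one-liner above.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip daemon-reload and restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service", []byte("...rendered unit...")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}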
	I0531 10:24:21.760977    4335 client.go:171] LocalClient.Create took 8.180397791s
	I0531 10:24:21.760992    4335 start.go:173] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220531102407-2169" took 8.180468578s
	I0531 10:24:21.760999    4335 start.go:306] post-start starting for "ingress-addon-legacy-20220531102407-2169" (driver="docker")
	I0531 10:24:21.761003    4335 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 10:24:21.761066    4335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 10:24:21.761113    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:21.828277    4335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa Username:docker}
	I0531 10:24:21.913949    4335 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 10:24:21.917328    4335 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 10:24:21.917345    4335 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 10:24:21.917366    4335 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 10:24:21.917375    4335 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 10:24:21.917385    4335 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 10:24:21.917493    4335 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 10:24:21.917627    4335 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 10:24:21.917633    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> /etc/ssl/certs/21692.pem
	I0531 10:24:21.917780    4335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 10:24:21.924454    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 10:24:21.941311    4335 start.go:309] post-start completed in 180.305322ms
	I0531 10:24:21.941814    4335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:22.007961    4335 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/config.json ...
	I0531 10:24:22.008358    4335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 10:24:22.008408    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:22.073905    4335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa Username:docker}
	I0531 10:24:22.155983    4335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 10:24:22.160567    4335 start.go:134] duration metric: createHost completed in 8.602936401s
	I0531 10:24:22.160583    4335 start.go:81] releasing machines lock for "ingress-addon-legacy-20220531102407-2169", held for 8.603104711s
	I0531 10:24:22.160670    4335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:22.226806    4335 ssh_runner.go:195] Run: systemctl --version
	I0531 10:24:22.226808    4335 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 10:24:22.226878    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:22.226885    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:22.295395    4335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa Username:docker}
	I0531 10:24:22.297606    4335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa Username:docker}
	I0531 10:24:22.377879    4335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 10:24:22.510906    4335 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 10:24:22.520915    4335 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 10:24:22.520974    4335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 10:24:22.530436    4335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 10:24:22.542843    4335 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 10:24:22.607054    4335 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 10:24:22.667899    4335 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 10:24:22.677113    4335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 10:24:22.737578    4335 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 10:24:22.746977    4335 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 10:24:22.781842    4335 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 10:24:22.840157    4335 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.16 ...
	I0531 10:24:22.840319    4335 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220531102407-2169 dig +short host.docker.internal
	I0531 10:24:22.978196    4335 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 10:24:22.978382    4335 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 10:24:22.982567    4335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
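The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any stale line, append the fresh one, and copy the result back over /etc/hosts. A Go sketch of the same pattern, with an atomic rename standing in for the sudo cp; this is an illustration, not minikube's code, which runs the shell version over SSH.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry removes any line ending in "\t<name>" and appends the
// fresh "ip\tname" mapping, writing through a temp file so the hosts
// file is replaced in one step.
func ensureHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	// Values taken from the log lines above.
	if err := ensureHostEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}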
	I0531 10:24:22.992805    4335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:24:23.059470    4335 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0531 10:24:23.059552    4335 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 10:24:23.088633    4335 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0531 10:24:23.088647    4335 docker.go:541] Images already preloaded, skipping extraction
	I0531 10:24:23.088705    4335 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 10:24:23.117958    4335 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0531 10:24:23.117971    4335 cache_images.go:84] Images are preloaded, skipping loading
	I0531 10:24:23.118035    4335 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 10:24:23.190818    4335 cni.go:95] Creating CNI manager for ""
	I0531 10:24:23.190830    4335 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:24:23.190844    4335 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 10:24:23.190858    4335 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220531102407-2169 NodeName:ingress-addon-legacy-20220531102407-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 10:24:23.190974    4335 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220531102407-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
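The kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) rendered from the options struct logged before it. A trimmed Go text/template sketch of how such a fragment can be produced; the template covers only a slice of the real document and the parameter names are assumptions for this sketch.

package main

import (
	"os"
	"text/template"
)

// A stand-in template for one fragment of the ClusterConfiguration above.
var clusterTmpl = template.Must(template.New("cfg").Parse(`apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`))

func main() {
	// Parameter values taken from the kubeadm options line above.
	params := struct {
		ControlPlaneAddress, KubernetesVersion, DNSDomain, PodSubnet, ServiceCIDR string
		APIServerPort                                                             int
	}{
		ControlPlaneAddress: "control-plane.minikube.internal",
		KubernetesVersion:   "v1.18.20",
		DNSDomain:           "cluster.local",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
		APIServerPort:       8443,
	}
	if err := clusterTmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}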
	
	I0531 10:24:23.191061    4335 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220531102407-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220531102407-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 10:24:23.191118    4335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0531 10:24:23.198083    4335 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 10:24:23.198126    4335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 10:24:23.205394    4335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0531 10:24:23.219121    4335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0531 10:24:23.231816    4335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes)
	I0531 10:24:23.244967    4335 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 10:24:23.248748    4335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 10:24:23.258100    4335 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169 for IP: 192.168.49.2
	I0531 10:24:23.258202    4335 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 10:24:23.258286    4335 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 10:24:23.258329    4335 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/client.key
	I0531 10:24:23.258342    4335 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/client.crt with IP's: []
	I0531 10:24:23.331135    4335 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/client.crt ...
	I0531 10:24:23.331143    4335 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/client.crt: {Name:mkcf1618ca659b8f46e1333c5b046b36af698c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:24:23.331427    4335 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/client.key ...
	I0531 10:24:23.331440    4335 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/client.key: {Name:mkd2939615c3a2aee2884186dbe86f2d0687451e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:24:23.331626    4335 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.key.dd3b5fb2
	I0531 10:24:23.331641    4335 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 10:24:23.386965    4335 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.crt.dd3b5fb2 ...
	I0531 10:24:23.386974    4335 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.crt.dd3b5fb2: {Name:mkda86d7be1723e7ce5b0372872f3c7283e2bb5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:24:23.387186    4335 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.key.dd3b5fb2 ...
	I0531 10:24:23.387194    4335 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.key.dd3b5fb2: {Name:mkda1c39174622288981bc69bd5797e9be95983c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:24:23.387374    4335 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.crt
	I0531 10:24:23.387529    4335 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.key
	I0531 10:24:23.387671    4335 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.key
	I0531 10:24:23.387687    4335 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.crt with IP's: []
	I0531 10:24:23.510500    4335 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.crt ...
	I0531 10:24:23.510508    4335 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.crt: {Name:mkfffb7a7be9eddd126a397cb3d2ef1098981c62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:24:23.510732    4335 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.key ...
	I0531 10:24:23.510743    4335 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.key: {Name:mk66edc6b69b640198c4bea65b9eb1b4c1ce9d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:24:23.510921    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 10:24:23.510946    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 10:24:23.510963    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 10:24:23.510979    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 10:24:23.511003    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 10:24:23.511023    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 10:24:23.511038    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 10:24:23.511053    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 10:24:23.511151    4335 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 10:24:23.511187    4335 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 10:24:23.511198    4335 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 10:24:23.511227    4335 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 10:24:23.511253    4335 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 10:24:23.511288    4335 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 10:24:23.511344    4335 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 10:24:23.511387    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:24:23.511406    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem -> /usr/share/ca-certificates/2169.pem
	I0531 10:24:23.511428    4335 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> /usr/share/ca-certificates/21692.pem
	I0531 10:24:23.511876    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 10:24:23.528871    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 10:24:23.545072    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 10:24:23.562164    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 10:24:23.578332    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 10:24:23.594637    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 10:24:23.612107    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 10:24:23.629516    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 10:24:23.645980    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 10:24:23.663087    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 10:24:23.679289    4335 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 10:24:23.695716    4335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 10:24:23.708879    4335 ssh_runner.go:195] Run: openssl version
	I0531 10:24:23.714019    4335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 10:24:23.721356    4335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:24:23.725197    4335 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:24:23.725241    4335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:24:23.730131    4335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 10:24:23.737572    4335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 10:24:23.745649    4335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 10:24:23.749607    4335 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 10:24:23.749649    4335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 10:24:23.754737    4335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 10:24:23.762201    4335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 10:24:23.769657    4335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 10:24:23.774185    4335 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 10:24:23.774224    4335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 10:24:23.779413    4335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
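The three "ln -fs" commands above implement OpenSSL's hash-based CA lookup: each certificate placed in /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash. A minimal sketch of the same step, assuming the cert path from this run (the hash value is whatever openssl prints, b5213941 here):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # subject hash; b5213941 in this log
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves trusted CAs via <hash>.0 links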
	I0531 10:24:23.786947    4335 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220531102407-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220531102407-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
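The StartCluster line above dumps the in-memory ClusterConfig struct. minikube also persists the same settings per profile as JSON, so a hedged way to inspect them on disk (assuming the default profile layout; python3 is used only for pretty-printing):

	cat "$HOME/.minikube/profiles/ingress-addon-legacy-20220531102407-2169/config.json" | python3 -m json.tool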
	I0531 10:24:23.787047    4335 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 10:24:23.815865    4335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 10:24:23.823957    4335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 10:24:23.831374    4335 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 10:24:23.831419    4335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 10:24:23.838631    4335 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 10:24:23.838662    4335 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 10:24:24.543122    4335 out.go:204]   - Generating certificates and keys ...
	I0531 10:24:27.261103    4335 out.go:204]   - Booting up control plane ...
	W0531 10:26:22.177187    4335 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220531102407-2169 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220531102407-2169 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0531 17:24:23.886028     831 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0531 17:24:27.248809     831 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0531 17:24:27.249570     831 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
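Because this cluster runs under the docker driver, kubeadm's systemctl advice has to be executed inside the minikube container, not on the macOS host. A sketch of how the suggestions above could be followed for this profile (commands are illustrative, using the binary under test):

	PROFILE=ingress-addon-legacy-20220531102407-2169
	out/minikube-darwin-amd64 ssh -p "$PROFILE" "sudo systemctl status kubelet --no-pager"
	out/minikube-darwin-amd64 ssh -p "$PROFILE" "sudo journalctl -u kubelet -n 100 --no-pager"
	docker exec "$PROFILE" docker ps -a | grep kube | grep -v pause  # any surviving control-plane containers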
	
	I0531 10:26:22.177237    4335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 10:26:22.608489    4335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 10:26:22.617620    4335 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 10:26:22.617672    4335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 10:26:22.624764    4335 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 10:26:22.624786    4335 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 10:26:23.319095    4335 out.go:204]   - Generating certificates and keys ...
	I0531 10:26:24.002772    4335 out.go:204]   - Booting up control plane ...
	I0531 10:28:18.967824    4335 kubeadm.go:397] StartCluster complete in 3m55.114107649s
	I0531 10:28:18.967902    4335 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 10:28:18.995839    4335 logs.go:274] 0 containers: []
	W0531 10:28:18.995853    4335 logs.go:276] No container was found matching "kube-apiserver"
	I0531 10:28:18.995908    4335 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 10:28:19.024530    4335 logs.go:274] 0 containers: []
	W0531 10:28:19.024544    4335 logs.go:276] No container was found matching "etcd"
	I0531 10:28:19.024605    4335 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 10:28:19.055738    4335 logs.go:274] 0 containers: []
	W0531 10:28:19.055750    4335 logs.go:276] No container was found matching "coredns"
	I0531 10:28:19.055811    4335 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 10:28:19.088906    4335 logs.go:274] 0 containers: []
	W0531 10:28:19.088920    4335 logs.go:276] No container was found matching "kube-scheduler"
	I0531 10:28:19.088975    4335 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 10:28:19.117904    4335 logs.go:274] 0 containers: []
	W0531 10:28:19.117916    4335 logs.go:276] No container was found matching "kube-proxy"
	I0531 10:28:19.117974    4335 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 10:28:19.145780    4335 logs.go:274] 0 containers: []
	W0531 10:28:19.145793    4335 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 10:28:19.145851    4335 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 10:28:19.173422    4335 logs.go:274] 0 containers: []
	W0531 10:28:19.173435    4335 logs.go:276] No container was found matching "storage-provisioner"
	I0531 10:28:19.173498    4335 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 10:28:19.202308    4335 logs.go:274] 0 containers: []
	W0531 10:28:19.202321    4335 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 10:28:19.202328    4335 logs.go:123] Gathering logs for kubelet ...
	I0531 10:28:19.202334    4335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 10:28:19.241675    4335 logs.go:123] Gathering logs for dmesg ...
	I0531 10:28:19.241689    4335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 10:28:19.253132    4335 logs.go:123] Gathering logs for describe nodes ...
	I0531 10:28:19.253144    4335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 10:28:19.303914    4335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 10:28:19.303924    4335 logs.go:123] Gathering logs for Docker ...
	I0531 10:28:19.303930    4335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 10:28:19.317240    4335 logs.go:123] Gathering logs for container status ...
	I0531 10:28:19.317253    4335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 10:28:21.367772    4335 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05050836s)
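The container-status probe just completed uses a fallback chain: "which crictl || echo crictl" hands sudo an absolute path when crictl is installed, and echoes the bare name otherwise so the command fails cleanly into the docker branch (the one-liner also falls back if crictl itself exits nonzero). Unrolled, the same logic reads as this equivalent sketch:

	if command -v crictl >/dev/null 2>&1; then
	    sudo "$(command -v crictl)" ps -a   # CRI-level listing when crictl exists
	else
	    sudo docker ps -a                   # otherwise fall back to the Docker CLI
	fi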
	W0531 10:28:21.367929    4335 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0531 17:26:22.671370    3299 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0531 17:26:23.969192    3299 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0531 17:26:23.970039    3299 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0531 10:28:21.367944    4335 out.go:239] * 
	W0531 10:28:21.368111    4335 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0531 17:26:22.671370    3299 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0531 17:26:23.969192    3299 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0531 17:26:23.970039    3299 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 10:28:21.368127    4335 out.go:239] * 
	W0531 10:28:21.368692    4335 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
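The boxed advice can be followed directly with the binary under test; the --file flag is printed in the box itself, and -p selects this run's profile:

	out/minikube-darwin-amd64 logs --file=logs.txt -p ingress-addon-legacy-20220531102407-2169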
	I0531 10:28:21.432445    4335 out.go:177] 
	W0531 10:28:21.475819    4335 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	
	Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0531 17:26:22.671370    3299 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0531 17:26:23.969192    3299 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0531 17:26:23.970039    3299 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 10:28:21.475951    4335 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 10:28:21.476033    4335 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0531 10:28:21.518565    4335 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220531102407-2169 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.12s)
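The root cause recorded above is K8S_KUBELET_NOT_RUNNING: kubeadm's health probe against localhost:10248 is refused on every attempt, so `kubeadm init` times out in the wait-control-plane phase and minikube aborts with exit status 109. A minimal sketch of the remediation the log itself suggests, reusing the start flags from this run and assuming (as the Suggestion line does) that a kubelet cgroup-driver mismatch is the culprit on this host:

    # Recreate the profile with the cgroup-driver override from the Suggestion line.
    minikube delete -p ingress-addon-legacy-20220531102407-2169
    minikube start -p ingress-addon-legacy-20220531102407-2169 \
      --kubernetes-version=v1.18.20 --memory=4096 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still refuses to come up, `minikube ssh` into the node and run `journalctl -xeu kubelet`, as the kubeadm output advises, to surface the concrete startup error.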

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.57s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220531102407-2169 addons enable ingress --alsologtostderr -v=5
E0531 10:28:23.595661    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:28:44.077884    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:29:08.562952    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:29:25.038164    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220531102407-2169 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.071347226s)

                                                
                                                
-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 10:28:21.662177    4493 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:28:21.662504    4493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:28:21.662509    4493 out.go:309] Setting ErrFile to fd 2...
	I0531 10:28:21.662513    4493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:28:21.662604    4493 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:28:21.663056    4493 config.go:178] Loaded profile config "ingress-addon-legacy-20220531102407-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0531 10:28:21.663069    4493 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220531102407-2169"
	I0531 10:28:21.663078    4493 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220531102407-2169"
	I0531 10:28:21.663316    4493 host.go:66] Checking if "ingress-addon-legacy-20220531102407-2169" exists ...
	I0531 10:28:21.663778    4493 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220531102407-2169 --format={{.State.Status}}
	I0531 10:28:21.751528    4493 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0531 10:28:21.773040    4493 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0531 10:28:21.794131    4493 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0531 10:28:21.815992    4493 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0531 10:28:21.838079    4493 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 10:28:21.838117    4493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0531 10:28:21.838247    4493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:28:21.906475    4493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa Username:docker}
	I0531 10:28:21.993357    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:22.045163    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:22.045184    4493 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:22.321907    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:22.376014    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:22.376031    4493 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:22.917594    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:22.970240    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:22.970261    4493 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:23.627585    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:23.678979    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:23.678993    4493 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:24.472413    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:24.523932    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:24.523956    4493 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:25.696480    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:25.748704    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:25.748720    4493 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:28.001985    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:28.053822    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:28.053842    4493 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:29.665008    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:29.715039    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:29.715054    4493 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:32.519642    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:32.569178    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:32.569195    4493 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:36.396395    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:36.446769    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:36.446787    4493 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:44.146514    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:44.198915    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:44.198934    4493 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:58.836821    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:28:58.886579    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:28:58.886595    4493 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:27.295100    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:29:27.345059    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:27.345078    4493 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:50.514771    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0531 10:29:50.565293    4493 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:50.565326    4493 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20220531102407-2169"
	I0531 10:29:50.591939    4493 out.go:177] * Verifying ingress addon...
	I0531 10:29:50.614316    4493 out.go:177] 
	W0531 10:29:50.636169    4493 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220531102407-2169" does not exist: client config: context "ingress-addon-legacy-20220531102407-2169" does not exist]
	W0531 10:29:50.636201    4493 out.go:239] * 
	W0531 10:29:50.639229    4493 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 10:29:50.660834    4493 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
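Every `kubectl apply` above fails the same way: connection refused on localhost:8443, retried by retry.go with growing backoff (276ms up to roughly 28s) until the ~90s budget is exhausted. That is consistent with the apiserver never having started, since the cluster in the previous test never got past `kubeadm init`. A hypothetical manual check, using the profile name from this run, to separate "apiserver down" from "addon manifest broken":

    # Probe the apiserver port from inside the node container; even a 401/403
    # response would prove the listener is up, while "connection refused"
    # confirms the apiserver never started.
    minikube -p ingress-addon-legacy-20220531102407-2169 ssh -- \
      sudo curl -ks https://localhost:8443/healthz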
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220531102407-2169
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220531102407-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0",
	        "Created": "2022-05-31T17:24:18.942958657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:24:19.238321227Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/hostname",
	        "HostsPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/hosts",
	        "LogPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0-json.log",
	        "Name": "/ingress-addon-legacy-20220531102407-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220531102407-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220531102407-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220531102407-2169",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220531102407-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220531102407-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220531102407-2169",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220531102407-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d327d7702d83060b0b15eb3f2dbc123a5269f276e80bb95d9d0822212cccf525",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52997"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52998"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52994"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52995"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d327d7702d83",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220531102407-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "046472d58d47",
	                        "ingress-addon-legacy-20220531102407-2169"
	                    ],
	                    "NetworkID": "ac7a16d732491f57dce51fcb17c147ccddc84563ddcaca6e7dd86f5f6d89ab13",
	                    "EndpointID": "6f3248b3e9461c53eab5c7e6f2bed2fb4a203dd40c55fb0e0250b4c916bfe634",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
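The inspect output above is mostly useful for its port table: the node's 8443/tcp (apiserver) is published on 127.0.0.1:52995 and 22/tcp (SSH) on 127.0.0.1:52996, which is how the darwin host reaches the container. The same Go-template query the harness runs against 22/tcp can be pointed at the apiserver port, for example:

    # Extract the host port mapped to the apiserver; the format string mirrors
    # the 22/tcp lookup that appears earlier in this log.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
      ingress-addon-legacy-20220531102407-2169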
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220531102407-2169 -n ingress-addon-legacy-20220531102407-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220531102407-2169 -n ingress-addon-legacy-20220531102407-2169: exit status 6 (433.429535ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 10:29:51.176888    4517 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220531102407-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220531102407-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.57s)
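The exit status 6 in the post-mortem comes from the kubeconfig rather than the container: the profile's entry was never written to the kubeconfig file, so `status` cannot extract an endpoint even though docker reports the container as Running. The warning text names the fix itself; a sketch, assuming one wanted to repair the context by hand before retrying:

    # Rewrite the kubeconfig entry for this profile, then confirm the context exists.
    minikube update-context -p ingress-addon-legacy-20220531102407-2169
    kubectl config get-contexts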

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220531102407-2169 addons enable ingress-dns --alsologtostderr -v=5
E0531 10:30:46.960432    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220531102407-2169 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.039788301s)

                                                
                                                
-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 10:29:51.235478    4527 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:29:51.235783    4527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:29:51.235788    4527 out.go:309] Setting ErrFile to fd 2...
	I0531 10:29:51.235792    4527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:29:51.235881    4527 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:29:51.236314    4527 config.go:178] Loaded profile config "ingress-addon-legacy-20220531102407-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0531 10:29:51.236326    4527 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220531102407-2169"
	I0531 10:29:51.236333    4527 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220531102407-2169"
	I0531 10:29:51.236555    4527 host.go:66] Checking if "ingress-addon-legacy-20220531102407-2169" exists ...
	I0531 10:29:51.237019    4527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220531102407-2169 --format={{.State.Status}}
	I0531 10:29:51.324668    4527 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0531 10:29:51.346685    4527 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0531 10:29:51.368423    4527 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 10:29:51.368460    4527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0531 10:29:51.368599    4527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220531102407-2169
	I0531 10:29:51.437893    4527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/ingress-addon-legacy-20220531102407-2169/id_rsa Username:docker}
	I0531 10:29:51.527468    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:29:51.575974    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:51.575994    4527 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:51.854426    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:29:51.908167    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:51.908182    4527 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:52.448551    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:29:52.527551    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:52.527570    4527 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:53.182960    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:29:53.234942    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:53.234958    4527 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:54.026483    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:29:54.078561    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:54.078575    4527 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:55.250594    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:29:55.303771    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:55.303795    4527 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:57.559161    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:29:57.610035    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:57.610054    4527 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:59.222963    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:29:59.274818    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:29:59.274833    4527 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:02.081440    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:30:02.132012    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:02.132029    4527 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:05.959230    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:30:06.011349    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:06.011363    4527 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:13.709342    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:30:13.762761    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:13.762774    4527 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:28.400594    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:30:28.452277    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:28.452291    4527 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:56.861075    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:30:56.911778    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:30:56.911792    4527 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:31:20.082359    4527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0531 10:31:20.134120    4527 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0531 10:31:20.156234    4527 out.go:177] 
	W0531 10:31:20.177980    4527 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0531 10:31:20.178012    4527 out.go:239] * 
	* 
	W0531 10:31:20.181089    4527 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 10:31:20.202980    4527 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
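[Triage note] Every apply attempt above fails the same way: the apiserver behind localhost:8443 never became reachable, so each `kubectl apply` is refused, and retry.go reschedules the command with growing, jittered delays (276ms, 540ms, ... up to ~28s) until minikube aborts with MK_ADDON_ENABLE. Below is a minimal Go sketch of that retry-with-increasing-backoff shape; it is illustrative only, not minikube's retry.go, and applyAddon is a hypothetical stand-in for the kubectl invocation.

	// Illustrative sketch of the backoff pattern visible in the log above.
	// Not minikube's implementation; applyAddon is a hypothetical stand-in.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func applyAddon() error {
		// Stand-in for the failing `kubectl apply -f ...` call.
		return errors.New("connection to the server localhost:8443 was refused")
	}

	func main() {
		delay := 250 * time.Millisecond
		var err error
		for attempt := 0; attempt < 5; attempt++ {
			if err = applyAddon(); err == nil {
				return
			}
			// Grow the delay and add jitter, matching the irregular intervals above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		fmt.Println("giving up:", err) // the real run raises MK_ADDON_ENABLE at this point
	}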
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220531102407-2169
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220531102407-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0",
	        "Created": "2022-05-31T17:24:18.942958657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:24:19.238321227Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/hostname",
	        "HostsPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/hosts",
	        "LogPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0-json.log",
	        "Name": "/ingress-addon-legacy-20220531102407-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220531102407-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220531102407-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220531102407-2169",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220531102407-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220531102407-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220531102407-2169",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220531102407-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d327d7702d83060b0b15eb3f2dbc123a5269f276e80bb95d9d0822212cccf525",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52997"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52998"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52994"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52995"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d327d7702d83",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220531102407-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "046472d58d47",
	                        "ingress-addon-legacy-20220531102407-2169"
	                    ],
	                    "NetworkID": "ac7a16d732491f57dce51fcb17c147ccddc84563ddcaca6e7dd86f5f6d89ab13",
	                    "EndpointID": "6f3248b3e9461c53eab5c7e6f2bed2fb4a203dd40c55fb0e0250b4c916bfe634",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220531102407-2169 -n ingress-addon-legacy-20220531102407-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220531102407-2169 -n ingress-addon-legacy-20220531102407-2169: exit status 6 (419.892697ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 10:31:20.706109    4583 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220531102407-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220531102407-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
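[Triage note] The exit status 6 comes from the kubeconfig check, not the container: docker inspect shows the node Running with 8443 published on 127.0.0.1:52995, but status.go:413 cannot extract an apiserver endpoint because the profile has no cluster entry in the kubeconfig. A minimal sketch of that lookup, using client-go's clientcmd, follows; endpointFor and the kubeconfig path are placeholders, not minikube's code.

	// Sketch of the endpoint lookup that fails at status.go:413. Assumes
	// k8s.io/client-go is available; endpointFor is a hypothetical helper.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func endpointFor(kubeconfig, profile string) (string, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			return "", err
		}
		cluster, ok := cfg.Clusters[profile]
		if !ok {
			// The case in this log: the profile never made it into the kubeconfig.
			return "", fmt.Errorf("extract IP: %q does not appear in %s", profile, kubeconfig)
		}
		return cluster.Server, nil // e.g. https://127.0.0.1:52995
	}

	func main() {
		ep, err := endpointFor("/path/to/kubeconfig", "ingress-addon-legacy-20220531102407-2169")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(ep)
	}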
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.49s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:156: failed to get Kubernetes client: <nil>
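[Triage note] This sub-test aborts in 0.49s before exercising anything: with no usable kubeconfig entry for the profile (see the status failure above), the harness cannot construct a Kubernetes client, hence the nil client error. The post-mortem that follows is the same docker inspect and status output as in the previous failure.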
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220531102407-2169
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220531102407-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0",
	        "Created": "2022-05-31T17:24:18.942958657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:24:19.238321227Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/hostname",
	        "HostsPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/hosts",
	        "LogPath": "/var/lib/docker/containers/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0/046472d58d47d8bdca6e05c036c29896eb7368e580ccfbc94986e59f3df17dd0-json.log",
	        "Name": "/ingress-addon-legacy-20220531102407-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220531102407-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220531102407-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4fa70957331c6116e033ef306933aa3d944d08b39d7d5b7be783e815a26e45cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220531102407-2169",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220531102407-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220531102407-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220531102407-2169",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220531102407-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d327d7702d83060b0b15eb3f2dbc123a5269f276e80bb95d9d0822212cccf525",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52997"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52998"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52994"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52995"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d327d7702d83",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220531102407-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "046472d58d47",
	                        "ingress-addon-legacy-20220531102407-2169"
	                    ],
	                    "NetworkID": "ac7a16d732491f57dce51fcb17c147ccddc84563ddcaca6e7dd86f5f6d89ab13",
	                    "EndpointID": "6f3248b3e9461c53eab5c7e6f2bed2fb4a203dd40c55fb0e0250b4c916bfe634",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220531102407-2169 -n ingress-addon-legacy-20220531102407-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220531102407-2169 -n ingress-addon-legacy-20220531102407-2169: exit status 6 (419.968059ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 10:31:21.198256    4595 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220531102407-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220531102407-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.49s)

                                                
                                    
x
+
TestPreload (264.74s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220531104214-2169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0531 10:43:03.102549    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:44:08.562053    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:44:26.162736    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220531104214-2169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m21.698689473s)

                                                
                                                
-- stdout --
	* [test-preload-20220531104214-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node test-preload-20220531104214-2169 in cluster test-preload-20220531104214-2169
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
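[Triage note] The doubled "Generating certificates and keys ... / Booting up control plane ..." lines in the stdout above are consistent with a failed kubeadm init that minikube retried once; after the second attempt also failed to bring up the v1.17.0 control plane, the run ended with exit status 109 after 4m21s. The stderr below is the full trace of that start, beginning with environment detection.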
** stderr ** 
	I0531 10:42:14.861827    7706 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:42:14.861997    7706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:42:14.862002    7706 out.go:309] Setting ErrFile to fd 2...
	I0531 10:42:14.862006    7706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:42:14.862114    7706 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:42:14.862452    7706 out.go:303] Setting JSON to false
	I0531 10:42:14.878229    7706 start.go:115] hostinfo: {"hostname":"37309.local","uptime":2503,"bootTime":1654016431,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:42:14.878305    7706 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:42:14.900220    7706 out.go:177] * [test-preload-20220531104214-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:42:14.942413    7706 notify.go:193] Checking for updates...
	I0531 10:42:14.964067    7706 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 10:42:14.985114    7706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:42:15.007315    7706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:42:15.029336    7706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:42:15.050969    7706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 10:42:15.072504    7706 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 10:42:15.143785    7706 docker.go:137] docker version: linux-20.10.14
	I0531 10:42:15.143934    7706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:42:15.267816    7706 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-05-31 17:42:15.219008283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:42:15.289916    7706 out.go:177] * Using the docker driver based on user configuration
	I0531 10:42:15.332509    7706 start.go:284] selected driver: docker
	I0531 10:42:15.332534    7706 start.go:806] validating driver "docker" against <nil>
	I0531 10:42:15.332560    7706 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 10:42:15.335979    7706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:42:15.460371    7706 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-05-31 17:42:15.411647301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:42:15.460523    7706 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 10:42:15.460666    7706 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 10:42:15.482616    7706 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 10:42:15.504224    7706 cni.go:95] Creating CNI manager for ""
	I0531 10:42:15.504250    7706 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:42:15.504262    7706 start_flags.go:306] config:
	{Name:test-preload-20220531104214-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220531104214-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
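
The config block above is what gets persisted to the profile's config.json a few lines further down. A sketch of reading such a saved profile back, assuming the on-disk JSON mirrors the field names in the dump (only a few fields are modeled here, and the path is illustrative):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // KubernetesConfig and ClusterConfig model a small assumed subset of the
    // persisted profile; field names are taken from the config dump above.
    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        KubernetesConfig KubernetesConfig
    }

    func main() {
        // Illustrative path; profiles live under <minikube home>/profiles/<name>/config.json.
        raw, err := os.ReadFile("config.json")
        if err != nil {
            panic(err)
        }
        var cc ClusterConfig
        if err := json.Unmarshal(raw, &cc); err != nil {
            panic(err)
        }
        fmt.Printf("%s: driver=%s kubernetes=%s\n", cc.Name, cc.Driver, cc.KubernetesConfig.KubernetesVersion)
    }
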
	I0531 10:42:15.548372    7706 out.go:177] * Starting control plane node test-preload-20220531104214-2169 in cluster test-preload-20220531104214-2169
	I0531 10:42:15.570218    7706 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 10:42:15.591262    7706 out.go:177] * Pulling base image ...
	I0531 10:42:15.634258    7706 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0531 10:42:15.634285    7706 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 10:42:15.634619    7706 cache.go:107] acquiring lock: {Name:mk07cc7f7559770f9f4d7a752db1371d8c246008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.635689    7706 cache.go:107] acquiring lock: {Name:mk367befcab7b19583b9065ff358ce3d1e6319d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.636053    7706 cache.go:107] acquiring lock: {Name:mk898c2b505614abb5e4075d4e3cfdc5f3a0a2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.636498    7706 cache.go:107] acquiring lock: {Name:mk9ec793ffbb36e8c5fc6e211d63fae9f1734dcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.636586    7706 cache.go:107] acquiring lock: {Name:mkda553ce69dfc7e4ab91da6b6ce6ebf93afe3fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.636604    7706 cache.go:107] acquiring lock: {Name:mk47f1b4bbafd1a0806c86e6a8faa4729b451233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.636889    7706 cache.go:107] acquiring lock: {Name:mkc69dc7e945ed5584b3f245f3bfef355041bea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.636995    7706 cache.go:107] acquiring lock: {Name:mk146392b09dd28b1374103c887a353b86920eda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.637124    7706 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 10:42:15.637154    7706 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.538667ms
	I0531 10:42:15.637281    7706 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
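
Each cached image above is guarded by a named lock whose spec carries Delay:500ms and Timeout:10m0s, i.e. acquisition retries every half second until a deadline. A rough equivalent using a plain lockfile (the real implementation differs; this only mirrors the retry/timeout shape logged above):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire retries an exclusive-create of a lockfile every `delay`
    // until `timeout` elapses, mirroring the Delay/Timeout fields in the log.
    func acquire(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f.Close() // lock held; caller removes the file to release
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s: %w", path, err)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        if err := acquire("/tmp/mk-image.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
            panic(err)
        }
        defer os.Remove("/tmp/mk-image.lock")
        fmt.Println("lock acquired")
    }
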
	I0531 10:42:15.637347    7706 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0531 10:42:15.637384    7706 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0531 10:42:15.637447    7706 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/config.json ...
	I0531 10:42:15.637511    7706 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0531 10:42:15.637517    7706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/config.json: {Name:mkec3b757d096c2b00c272787cdd3ba3e499349a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:42:15.637553    7706 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0531 10:42:15.637561    7706 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0531 10:42:15.637619    7706 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0531 10:42:15.637678    7706 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0531 10:42:15.643730    7706 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0531 10:42:15.644931    7706 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0531 10:42:15.645097    7706 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0531 10:42:15.645787    7706 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0531 10:42:15.645889    7706 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0531 10:42:15.646042    7706 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0531 10:42:15.646287    7706 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0531 10:42:15.701233    7706 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 10:42:15.701257    7706 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 10:42:15.701272    7706 cache.go:206] Successfully downloaded all kic artifacts
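
The "exists in daemon, skipping pull/load" decision above comes down to asking the local daemon whether it already holds the kic base image. A minimal version of that check via the docker CLI (`docker image inspect` exits non-zero when the reference is unknown):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon reports whether ref resolves in the local docker daemon.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230"
        if imageInDaemon(ref) {
            fmt.Println("exists in daemon, skipping pull")
        } else {
            fmt.Println("not present, would pull")
        }
    }
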
	I0531 10:42:15.701310    7706 start.go:352] acquiring machines lock for test-preload-20220531104214-2169: {Name:mkf57710c2494ad1fe4407fe7cbcaf5d8df60d58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:42:15.701436    7706 start.go:356] acquired machines lock for "test-preload-20220531104214-2169" in 115.08µs
	I0531 10:42:15.701462    7706 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220531104214-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220531104214-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 10:42:15.701561    7706 start.go:131] createHost starting for "" (driver="docker")
	I0531 10:42:15.723271    7706 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 10:42:15.723522    7706 start.go:165] libmachine.API.Create for "test-preload-20220531104214-2169" (driver="docker")
	I0531 10:42:15.723552    7706 client.go:168] LocalClient.Create starting
	I0531 10:42:15.723644    7706 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 10:42:15.723685    7706 main.go:134] libmachine: Decoding PEM data...
	I0531 10:42:15.723698    7706 main.go:134] libmachine: Parsing certificate...
	I0531 10:42:15.723768    7706 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 10:42:15.723802    7706 main.go:134] libmachine: Decoding PEM data...
	I0531 10:42:15.723814    7706 main.go:134] libmachine: Parsing certificate...
	I0531 10:42:15.724238    7706 cli_runner.go:164] Run: docker network inspect test-preload-20220531104214-2169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 10:42:15.786375    7706 cli_runner.go:211] docker network inspect test-preload-20220531104214-2169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 10:42:15.786441    7706 network_create.go:272] running [docker network inspect test-preload-20220531104214-2169] to gather additional debugging logs...
	I0531 10:42:15.786455    7706 cli_runner.go:164] Run: docker network inspect test-preload-20220531104214-2169
	W0531 10:42:15.847619    7706 cli_runner.go:211] docker network inspect test-preload-20220531104214-2169 returned with exit code 1
	I0531 10:42:15.847640    7706 network_create.go:275] error running [docker network inspect test-preload-20220531104214-2169]: docker network inspect test-preload-20220531104214-2169: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220531104214-2169
	I0531 10:42:15.847657    7706 network_create.go:277] output of [docker network inspect test-preload-20220531104214-2169]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220531104214-2169
	
	** /stderr **
	I0531 10:42:15.847713    7706 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 10:42:15.909546    7706 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000d9c380] misses:0}
	I0531 10:42:15.909583    7706 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 10:42:15.909597    7706 network_create.go:115] attempt to create docker network test-preload-20220531104214-2169 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 10:42:15.909661    7706 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220531104214-2169
	I0531 10:42:16.003814    7706 network_create.go:99] docker network test-preload-20220531104214-2169 192.168.49.0/24 created
	I0531 10:42:16.003842    7706 kic.go:106] calculated static IP "192.168.49.2" for the "test-preload-20220531104214-2169" container
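
The "calculated static IP 192.168.49.2" line follows directly from the reserved subnet: .1 goes to the gateway, so the first container gets .2. The arithmetic for a /24, shown purely for illustration:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, ipnet, err := net.ParseCIDR("192.168.49.0/24")
        if err != nil {
            panic(err)
        }
        base := ipnet.IP.To4()
        gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)   // 192.168.49.1
        firstNode := net.IPv4(base[0], base[1], base[2], base[3]+2) // 192.168.49.2
        fmt.Println("gateway:", gateway, "first node:", firstNode)
    }
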
	I0531 10:42:16.003917    7706 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 10:42:16.065820    7706 cli_runner.go:164] Run: docker volume create test-preload-20220531104214-2169 --label name.minikube.sigs.k8s.io=test-preload-20220531104214-2169 --label created_by.minikube.sigs.k8s.io=true
	I0531 10:42:16.126791    7706 oci.go:103] Successfully created a docker volume test-preload-20220531104214-2169
	I0531 10:42:16.126899    7706 cli_runner.go:164] Run: docker run --rm --name test-preload-20220531104214-2169-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220531104214-2169 --entrypoint /usr/bin/test -v test-preload-20220531104214-2169:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 10:42:16.196253    7706 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0531 10:42:16.221548    7706 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0531 10:42:16.238087    7706 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0531 10:42:16.238583    7706 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0531 10:42:16.259862    7706 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0531 10:42:16.266232    7706 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0531 10:42:16.282956    7706 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0531 10:42:16.372063    7706 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0531 10:42:16.372083    7706 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 736.00693ms
	I0531 10:42:16.372095    7706 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0531 10:42:16.580573    7706 oci.go:107] Successfully prepared a docker volume test-preload-20220531104214-2169
	I0531 10:42:16.580600    7706 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0531 10:42:16.580672    7706 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 10:42:16.706167    7706 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220531104214-2169 --name test-preload-20220531104214-2169 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220531104214-2169 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220531104214-2169 --network test-preload-20220531104214-2169 --ip 192.168.49.2 --volume test-preload-20220531104214-2169:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 10:42:17.083862    7706 cli_runner.go:164] Run: docker container inspect test-preload-20220531104214-2169 --format={{.State.Running}}
	I0531 10:42:17.154737    7706 cli_runner.go:164] Run: docker container inspect test-preload-20220531104214-2169 --format={{.State.Status}}
	I0531 10:42:17.231606    7706 cli_runner.go:164] Run: docker exec test-preload-20220531104214-2169 stat /var/lib/dpkg/alternatives/iptables
	I0531 10:42:17.250535    7706 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0531 10:42:17.250569    7706 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 1.614147487s
	I0531 10:42:17.250604    7706 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0531 10:42:17.354573    7706 oci.go:247] the created container "test-preload-20220531104214-2169" has a running status.
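
Between the `docker run` and the "has a running status" line above, the runner polls container state with the same inspect template seen in the log. A bare-bones polling loop to the same effect (the retry interval and attempt budget are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitRunning polls `docker container inspect --format={{.State.Running}}`
    // until it prints "true" or the attempt budget is exhausted.
    func waitRunning(name string, attempts int) error {
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("docker", "container", "inspect", name,
                "--format", "{{.State.Running}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "true" {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("container %s never reached a running state", name)
    }

    func main() {
        if err := waitRunning("test-preload-20220531104214-2169", 30); err != nil {
            panic(err)
        }
        fmt.Println("running")
    }
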
	I0531 10:42:17.354609    7706 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/test-preload-20220531104214-2169/id_rsa...
	I0531 10:42:17.456158    7706 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/test-preload-20220531104214-2169/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 10:42:17.569459    7706 cli_runner.go:164] Run: docker container inspect test-preload-20220531104214-2169 --format={{.State.Status}}
	I0531 10:42:17.636021    7706 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 10:42:17.636037    7706 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220531104214-2169 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 10:42:17.782893    7706 cli_runner.go:164] Run: docker container inspect test-preload-20220531104214-2169 --format={{.State.Status}}
	I0531 10:42:17.849738    7706 machine.go:88] provisioning docker machine ...
	I0531 10:42:17.849772    7706 ubuntu.go:169] provisioning hostname "test-preload-20220531104214-2169"
	I0531 10:42:17.849871    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:17.916577    7706 main.go:134] libmachine: Using SSH client type: native
	I0531 10:42:17.916761    7706 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58830 <nil> <nil>}
	I0531 10:42:17.916779    7706 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220531104214-2169 && echo "test-preload-20220531104214-2169" | sudo tee /etc/hostname
	I0531 10:42:18.036481    7706 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220531104214-2169
	
	I0531 10:42:18.036561    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:18.104294    7706 main.go:134] libmachine: Using SSH client type: native
	I0531 10:42:18.104440    7706 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58830 <nil> <nil>}
	I0531 10:42:18.104458    7706 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220531104214-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220531104214-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220531104214-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 10:42:18.215358    7706 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 10:42:18.215378    7706 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 10:42:18.215398    7706 ubuntu.go:177] setting up certificates
	I0531 10:42:18.215404    7706 provision.go:83] configureAuth start
	I0531 10:42:18.215470    7706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220531104214-2169
	I0531 10:42:18.245122    7706 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0531 10:42:18.245143    7706 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 2.610553705s
	I0531 10:42:18.245155    7706 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0531 10:42:18.282432    7706 provision.go:138] copyHostCerts
	I0531 10:42:18.282494    7706 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 10:42:18.282504    7706 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 10:42:18.282592    7706 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 10:42:18.282777    7706 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 10:42:18.282783    7706 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 10:42:18.282841    7706 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 10:42:18.282977    7706 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 10:42:18.282983    7706 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 10:42:18.283036    7706 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 10:42:18.283204    7706 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220531104214-2169 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220531104214-2169]
	I0531 10:42:18.509151    7706 provision.go:172] copyRemoteCerts
	I0531 10:42:18.509212    7706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 10:42:18.509253    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:18.560715    7706 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0531 10:42:18.560740    7706 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 2.926108507s
	I0531 10:42:18.560756    7706 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0531 10:42:18.577294    7706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58830 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/test-preload-20220531104214-2169/id_rsa Username:docker}
	I0531 10:42:18.609942    7706 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0531 10:42:18.609969    7706 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 2.9736654s
	I0531 10:42:18.609997    7706 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0531 10:42:18.659648    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 10:42:18.677295    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0531 10:42:18.696312    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 10:42:18.713030    7706 provision.go:86] duration metric: configureAuth took 497.613617ms
	I0531 10:42:18.713041    7706 ubuntu.go:193] setting minikube options for container-runtime
	I0531 10:42:18.713175    7706 config.go:178] Loaded profile config "test-preload-20220531104214-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0531 10:42:18.713222    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:18.779780    7706 main.go:134] libmachine: Using SSH client type: native
	I0531 10:42:18.779991    7706 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58830 <nil> <nil>}
	I0531 10:42:18.780003    7706 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 10:42:18.889793    7706 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 10:42:18.889813    7706 ubuntu.go:71] root file system type: overlay
	I0531 10:42:18.889963    7706 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 10:42:18.890036    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:18.956633    7706 main.go:134] libmachine: Using SSH client type: native
	I0531 10:42:18.956769    7706 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58830 <nil> <nil>}
	I0531 10:42:18.956816    7706 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 10:42:18.996127    7706 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0531 10:42:18.996151    7706 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 3.359858994s
	I0531 10:42:18.996160    7706 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0531 10:42:19.076279    7706 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 10:42:19.076356    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:19.135012    7706 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0531 10:42:19.135034    7706 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 3.498519297s
	I0531 10:42:19.135053    7706 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0531 10:42:19.135068    7706 cache.go:87] Successfully saved all images to host disk.
	I0531 10:42:19.142867    7706 main.go:134] libmachine: Using SSH client type: native
	I0531 10:42:19.143104    7706 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58830 <nil> <nil>}
	I0531 10:42:19.143122    7706 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 10:42:19.717198    7706 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:42:19.074269991 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0531 10:42:19.717221    7706 machine.go:91] provisioned docker machine in 1.867468193s
	I0531 10:42:19.717227    7706 client.go:171] LocalClient.Create took 3.993673563s
	I0531 10:42:19.717243    7706 start.go:173] duration metric: libmachine.API.Create for "test-preload-20220531104214-2169" took 3.99372318s
	I0531 10:42:19.717250    7706 start.go:306] post-start starting for "test-preload-20220531104214-2169" (driver="docker")
	I0531 10:42:19.717253    7706 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 10:42:19.717319    7706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 10:42:19.717366    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:19.783721    7706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58830 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/test-preload-20220531104214-2169/id_rsa Username:docker}
	I0531 10:42:19.867465    7706 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 10:42:19.895499    7706 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 10:42:19.895517    7706 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 10:42:19.895524    7706 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 10:42:19.895531    7706 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 10:42:19.895540    7706 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 10:42:19.895655    7706 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 10:42:19.895787    7706 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 10:42:19.895953    7706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 10:42:19.902987    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 10:42:19.920166    7706 start.go:309] post-start completed in 202.900651ms
	I0531 10:42:19.921001    7706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220531104214-2169
	I0531 10:42:19.987296    7706 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/config.json ...
	I0531 10:42:19.987757    7706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 10:42:19.987801    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:20.053136    7706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58830 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/test-preload-20220531104214-2169/id_rsa Username:docker}
	I0531 10:42:20.137094    7706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 10:42:20.141298    7706 start.go:134] duration metric: createHost completed in 4.439728812s
	I0531 10:42:20.141316    7706 start.go:81] releasing machines lock for "test-preload-20220531104214-2169", held for 4.439872579s
	I0531 10:42:20.141401    7706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220531104214-2169
	I0531 10:42:20.211511    7706 ssh_runner.go:195] Run: systemctl --version
	I0531 10:42:20.211515    7706 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 10:42:20.211591    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:20.211597    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:20.282233    7706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58830 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/test-preload-20220531104214-2169/id_rsa Username:docker}
	I0531 10:42:20.282773    7706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58830 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/test-preload-20220531104214-2169/id_rsa Username:docker}
	I0531 10:42:20.503643    7706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 10:42:20.513028    7706 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 10:42:20.522252    7706 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 10:42:20.522307    7706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 10:42:20.531287    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 10:42:20.543468    7706 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 10:42:20.611403    7706 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 10:42:20.679282    7706 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 10:42:20.689235    7706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 10:42:20.759231    7706 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 10:42:20.768622    7706 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 10:42:20.804024    7706 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 10:42:20.882602    7706 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.16 ...
	I0531 10:42:20.882762    7706 cli_runner.go:164] Run: docker exec -t test-preload-20220531104214-2169 dig +short host.docker.internal
	I0531 10:42:21.015441    7706 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 10:42:21.015543    7706 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 10:42:21.019855    7706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 10:42:21.029411    7706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220531104214-2169
	I0531 10:42:21.096013    7706 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0531 10:42:21.096071    7706 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 10:42:21.124166    7706 docker.go:610] Got preloaded images: 
	I0531 10:42:21.124178    7706 docker.go:616] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0531 10:42:21.124182    7706 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0531 10:42:21.131224    7706 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0531 10:42:21.131989    7706 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0531 10:42:21.132447    7706 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0531 10:42:21.133032    7706 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 10:42:21.133268    7706 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0531 10:42:21.133718    7706 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0531 10:42:21.134316    7706 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0531 10:42:21.134632    7706 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0531 10:42:21.137626    7706 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0531 10:42:21.138634    7706 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0531 10:42:21.140659    7706 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0531 10:42:21.140868    7706 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0531 10:42:21.140948    7706 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0531 10:42:21.140963    7706 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0531 10:42:21.141211    7706 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0531 10:42:21.141593    7706 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
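
The interleaved "retrieving image" and "daemon lookup" lines above come from the references being processed concurrently, roughly one goroutine per image, with the per-image locks from earlier serializing writers. A stripped-down shape of that fan-out (fetch is a stand-in for the real lookup-daemon-then-pull-remote logic):

    package main

    import (
        "fmt"
        "sync"
    )

    // fetch stands in for the actual retrieval; it only logs here.
    func fetch(ref string) error {
        fmt.Println("retrieving image:", ref)
        return nil
    }

    func main() {
        images := []string{
            "k8s.gcr.io/kube-apiserver:v1.17.0",
            "k8s.gcr.io/kube-proxy:v1.17.0",
            "k8s.gcr.io/pause:3.1",
            "k8s.gcr.io/etcd:3.4.3-0",
            "k8s.gcr.io/coredns:1.6.5",
        }
        var wg sync.WaitGroup
        for _, ref := range images {
            ref := ref // capture the loop variable (pre-Go 1.22 semantics)
            wg.Add(1)
            go func() {
                defer wg.Done()
                if err := fetch(ref); err != nil {
                    fmt.Println("retrieve failed:", ref, err)
                }
            }()
        }
        wg.Wait()
    }
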
	I0531 10:42:21.665414    7706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0531 10:42:21.695838    7706 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0531 10:42:21.695870    7706 docker.go:291] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0531 10:42:21.695921    7706 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0531 10:42:21.698808    7706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0531 10:42:21.709108    7706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0531 10:42:21.725559    7706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0531 10:42:21.725703    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0531 10:42:21.727922    7706 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0531 10:42:21.727946    7706 docker.go:291] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0531 10:42:21.727997    7706 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0531 10:42:21.733742    7706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0531 10:42:21.740183    7706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0531 10:42:21.749729    7706 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0531 10:42:21.749757    7706 docker.go:291] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0531 10:42:21.749787    7706 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0531 10:42:21.749812    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0531 10:42:21.749818    7706 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0531 10:42:21.787206    7706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0531 10:42:21.787555    7706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 10:42:21.793372    7706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0531 10:42:21.793493    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0531 10:42:21.803691    7706 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0531 10:42:21.803722    7706 docker.go:291] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0531 10:42:21.803791    7706 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0531 10:42:21.823949    7706 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0531 10:42:21.823978    7706 docker.go:291] Removing image: k8s.gcr.io/coredns:1.6.5
	I0531 10:42:21.823994    7706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0531 10:42:21.824031    7706 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0531 10:42:21.824107    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0531 10:42:21.828748    7706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0531 10:42:21.925861    7706 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0531 10:42:21.925888    7706 docker.go:291] Removing image: k8s.gcr.io/pause:3.1
	I0531 10:42:21.925929    7706 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0531 10:42:21.925953    7706 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0531 10:42:21.925957    7706 docker.go:291] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 10:42:21.925982    7706 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0531 10:42:21.926022    7706 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 10:42:21.926029    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0531 10:42:21.935768    7706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0531 10:42:21.935898    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0531 10:42:21.982616    7706 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0531 10:42:21.982649    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0531 10:42:21.985595    7706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0531 10:42:21.985842    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0531 10:42:22.002367    7706 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0531 10:42:22.002430    7706 docker.go:291] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0531 10:42:22.002509    7706 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0531 10:42:22.035522    7706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0531 10:42:22.035530    7706 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0531 10:42:22.035560    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0531 10:42:22.035656    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0531 10:42:22.039748    7706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0531 10:42:22.039883    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0531 10:42:22.051605    7706 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0531 10:42:22.051644    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0531 10:42:22.123582    7706 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0531 10:42:22.123625    7706 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0531 10:42:22.123623    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0531 10:42:22.123651    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0531 10:42:22.125295    7706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0531 10:42:22.125410    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0531 10:42:22.200465    7706 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0531 10:42:22.200499    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
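
Each "needs transfer" image follows the same pattern visible above: stat the target path on the node, treat a non-zero exit as "missing", and only then scp the cached tarball across. A rough Go sketch of that exists-or-copy check; runRemote is a hypothetical stand-in for minikube's ssh_runner:

    package main

    import "os/exec"

    // syncImage mirrors the stat-then-scp pattern in the log: probe the target
    // path, and only transfer the cached tarball when the probe fails.
    func syncImage(runRemote func(args ...string) error, cachePath, nodePath string) error {
    	if err := runRemote("stat", "-c", "%s %y", nodePath); err == nil {
    		return nil // tarball already on the node, skip the transfer
    	}
    	// the log shows an scp at this point; a plain cp keeps the sketch local
    	return runRemote("cp", cachePath, nodePath)
    }

    func main() {
    	local := func(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }
    	_ = syncImage(local, "/tmp/cache/etcd_3.4.3-0", "/var/lib/minikube/images/etcd_3.4.3-0")
    }
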
	I0531 10:42:22.281692    7706 docker.go:258] Loading image: /var/lib/minikube/images/pause_3.1
	I0531 10:42:22.281710    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0531 10:42:22.547490    7706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0531 10:42:23.215291    7706 docker.go:258] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0531 10:42:23.215305    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0531 10:42:23.851290    7706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0531 10:42:23.851333    7706 docker.go:258] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0531 10:42:23.851355    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0531 10:42:24.722913    7706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0531 10:42:25.238290    7706 docker.go:258] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0531 10:42:25.238306    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0531 10:42:27.211832    7706 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load": (1.973514371s)
	I0531 10:42:27.211855    7706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0531 10:42:27.211875    7706 docker.go:258] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0531 10:42:27.211893    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0531 10:42:28.126117    7706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0531 10:42:28.126142    7706 docker.go:258] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0531 10:42:28.126150    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0531 10:42:29.198867    7706 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load": (1.072703391s)
	I0531 10:42:29.198881    7706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0531 10:42:29.198901    7706 docker.go:258] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0531 10:42:29.198909    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0531 10:42:30.209749    7706 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.01081928s)
	I0531 10:42:30.209762    7706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0531 10:42:30.209776    7706 docker.go:258] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0531 10:42:30.209789    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0531 10:42:33.268961    7706 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (3.059157381s)
	I0531 10:42:33.268979    7706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0531 10:42:33.269017    7706 cache_images.go:123] Successfully loaded all cached images
	I0531 10:42:33.269022    7706 cache_images.go:92] LoadImages completed in 12.144840538s
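
The tarballs are then loaded with the exact pipeline shown in the log: the image files are root-owned on the node, so each one is streamed through sudo cat into docker load. A minimal, runnable Go version of that step:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadImage replays the pipeline from the log above: stream the root-owned
    // tarball into "docker load" via bash.
    func loadImage(path string) error {
    	return exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo cat %s | docker load", path)).Run()
    }

    func main() {
    	if err := loadImage("/var/lib/minikube/images/etcd_3.4.3-0"); err != nil {
    		fmt.Println("load failed:", err)
    	}
    }
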
	I0531 10:42:33.269109    7706 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 10:42:33.343428    7706 cni.go:95] Creating CNI manager for ""
	I0531 10:42:33.343439    7706 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:42:33.343449    7706 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 10:42:33.343462    7706 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220531104214-2169 NodeName:test-preload-20220531104214-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 10:42:33.343554    7706 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220531104214-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
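
The kubeadm config above is rendered from the options struct logged at kubeadm.go:158. A hedged sketch of how such a rendering could work with text/template; the kubeadmParams struct and initCfg template here are illustrative stand-ins, not minikube's real template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // a few of the fields visible in the logged options struct; the real one is larger
    type kubeadmParams struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	PodSubnet        string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	_ = t.Execute(os.Stdout, kubeadmParams{
    		AdvertiseAddress: "192.168.49.2",
    		APIServerPort:    8443,
    		NodeName:         "test-preload-20220531104214-2169",
    		PodSubnet:        "10.244.0.0/16",
    	})
    }
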
	
	I0531 10:42:33.343634    7706 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220531104214-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220531104214-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 10:42:33.343689    7706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0531 10:42:33.351435    7706 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0531 10:42:33.351480    7706 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0531 10:42:33.359904    7706 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0531 10:42:33.359904    7706 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.17.0/kubelet
	I0531 10:42:33.359906    7706 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.17.0/kubeadm
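
The ?checksum=file:<url>.sha256 query string on these download URLs is hashicorp/go-getter syntax: the checksum file is fetched first and the binary is verified against it before being written out. A small sketch of the same fetch, assuming the go-getter library (destination path is a placeholder):

    package main

    import getter "github.com/hashicorp/go-getter"

    func main() {
    	src := "https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl" +
    		"?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256"
    	// go-getter downloads the .sha256 file first, then verifies the binary against it
    	client := &getter.Client{Src: src, Dst: "/tmp/kubectl", Mode: getter.ClientModeFile}
    	if err := client.Get(); err != nil {
    		panic(err)
    	}
    }
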
	I0531 10:42:34.501214    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0531 10:42:34.505584    7706 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0531 10:42:34.505609    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0531 10:42:34.895903    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0531 10:42:34.961806    7706 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0531 10:42:34.961840    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0531 10:42:35.633819    7706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 10:42:35.703681    7706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0531 10:42:35.765799    7706 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0531 10:42:35.765829    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0531 10:42:37.578931    7706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 10:42:37.586236    7706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0531 10:42:37.598687    7706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 10:42:37.610962    7706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0531 10:42:37.623616    7706 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 10:42:37.627600    7706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
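
The one-liner above makes the /etc/hosts update idempotent: strip any stale control-plane.minikube.internal line, append the fresh tab-separated mapping, and sudo cp a temp file into place so the write happens as root. The same logic in a local Go sketch (minikube runs it through its ssh_runner instead):

    package main

    import "os/exec"

    // ensureHostsEntry reproduces the bash one-liner from the log: filter out any
    // stale control-plane line, append the new IP<TAB>name mapping, and sudo cp
    // the temp file over /etc/hosts.
    func ensureHostsEntry(ip string) error {
    	script := "{ grep -v $'\\tcontrol-plane.minikube.internal$' /etc/hosts; " +
    		"echo \"" + ip + "\tcontrol-plane.minikube.internal\"; } > /tmp/h.$$; " +
    		"sudo cp /tmp/h.$$ /etc/hosts"
    	return exec.Command("/bin/bash", "-c", script).Run()
    }

    func main() {
    	_ = ensureHostsEntry("192.168.49.2")
    }
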
	I0531 10:42:37.637105    7706 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169 for IP: 192.168.49.2
	I0531 10:42:37.637205    7706 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 10:42:37.637251    7706 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 10:42:37.637304    7706 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/client.key
	I0531 10:42:37.637315    7706 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/client.crt with IP's: []
	I0531 10:42:37.966871    7706 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/client.crt ...
	I0531 10:42:37.966885    7706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/client.crt: {Name:mkcab0fe8cc14ce5670351c198dde4ad762657a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:42:37.967222    7706 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/client.key ...
	I0531 10:42:37.967234    7706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/client.key: {Name:mk126bee900af1e66270bef42e7bd67397c1e9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:42:37.967459    7706 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.key.dd3b5fb2
	I0531 10:42:37.967481    7706 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 10:42:38.148622    7706 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.crt.dd3b5fb2 ...
	I0531 10:42:38.148632    7706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.crt.dd3b5fb2: {Name:mk116eeb67a5124c7920915088f9cb781042118a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:42:38.148859    7706 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.key.dd3b5fb2 ...
	I0531 10:42:38.148868    7706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.key.dd3b5fb2: {Name:mkb0b58e79129bc25da500e6040902f34416faa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:42:38.149077    7706 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.crt
	I0531 10:42:38.149242    7706 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.key
	I0531 10:42:38.149402    7706 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/proxy-client.key
	I0531 10:42:38.149420    7706 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/proxy-client.crt with IP's: []
	I0531 10:42:38.232917    7706 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/proxy-client.crt ...
	I0531 10:42:38.232926    7706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/proxy-client.crt: {Name:mkee247b9583752593a61968d77e1b2824c10617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:42:38.233147    7706 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/proxy-client.key ...
	I0531 10:42:38.233156    7706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/proxy-client.key: {Name:mk88dc509f57372b00b7fb0d553d364ff30ca13f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
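
The crypto.go steps above issue certificates signed by the cached minikubeCA key pair, with the SANs listed in each "with IP's:" line. A generic crypto/x509 sketch of CA-signed certificate issuance (not minikube's code; the throwaway CA built in main stands in for the cached one):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // signCert issues a cert for the given IPs, signed by an existing CA pair,
    // mirroring the "generating minikube signed cert ... with IP's:" step.
    func signCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses:  ips,
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    }

    func main() {
    	// self-sign a throwaway CA, standing in for the cached minikubeCA key pair
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)
    	der, err := signCert(caCert, caKey, []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1")})
    	fmt.Println(len(der), err)
    }
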
	I0531 10:42:38.233525    7706 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 10:42:38.233565    7706 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 10:42:38.233574    7706 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 10:42:38.233605    7706 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 10:42:38.233643    7706 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 10:42:38.233683    7706 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 10:42:38.233754    7706 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 10:42:38.234276    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 10:42:38.252956    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 10:42:38.270992    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 10:42:38.288607    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/test-preload-20220531104214-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 10:42:38.306178    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 10:42:38.323722    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 10:42:38.341243    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 10:42:38.358557    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 10:42:38.375364    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 10:42:38.392635    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 10:42:38.409320    7706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 10:42:38.426320    7706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 10:42:38.438796    7706 ssh_runner.go:195] Run: openssl version
	I0531 10:42:38.444424    7706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 10:42:38.452206    7706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:42:38.456163    7706 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:42:38.456198    7706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:42:38.461731    7706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 10:42:38.469500    7706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 10:42:38.477305    7706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 10:42:38.481337    7706 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 10:42:38.481392    7706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 10:42:38.486692    7706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 10:42:38.494330    7706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 10:42:38.501990    7706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 10:42:38.505696    7706 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 10:42:38.505735    7706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 10:42:38.510898    7706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
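
The b5213941.0-style names being symlinked above come from OpenSSL's subject hash: system trust stores index CA certificates as <hash>.0 under /etc/ssl/certs. A sketch that reproduces the hash-and-link step locally (it assumes openssl on PATH and write access to /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash recreates the ln -fs step: compute the OpenSSL subject
    // hash for the cert, then symlink it into the trust store as <hash>.0.
    func linkBySubjectHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	os.Remove(link) // -f semantics: replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }
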
	I0531 10:42:38.518411    7706 kubeadm.go:395] StartCluster: {Name:test-preload-20220531104214-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220531104214-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:42:38.518497    7706 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 10:42:38.548467    7706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 10:42:38.555861    7706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 10:42:38.563114    7706 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 10:42:38.563165    7706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 10:42:38.570248    7706 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 10:42:38.570275    7706 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 10:42:39.283520    7706 out.go:204]   - Generating certificates and keys ...
	I0531 10:42:42.252453    7706 out.go:204]   - Booting up control plane ...
	W0531 10:44:37.166634    7706 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220531104214-2169 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220531104214-2169 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0531 17:42:38.626016    1446 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0531 17:42:38.626150    1446 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0531 17:42:42.248567    1446 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0531 17:42:42.250342    1446 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0531 10:44:37.166669    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 10:44:37.589679    7706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 10:44:37.599041    7706 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 10:44:37.599087    7706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 10:44:37.606099    7706 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 10:44:37.606117    7706 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 10:44:38.301720    7706 out.go:204]   - Generating certificates and keys ...
	I0531 10:44:38.963191    7706 out.go:204]   - Booting up control plane ...
	I0531 10:46:33.883397    7706 kubeadm.go:397] StartCluster complete in 3m55.365134875s
	I0531 10:46:33.883509    7706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 10:46:33.912983    7706 logs.go:274] 0 containers: []
	W0531 10:46:33.912995    7706 logs.go:276] No container was found matching "kube-apiserver"
	I0531 10:46:33.913051    7706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 10:46:33.942603    7706 logs.go:274] 0 containers: []
	W0531 10:46:33.942618    7706 logs.go:276] No container was found matching "etcd"
	I0531 10:46:33.942681    7706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 10:46:33.972326    7706 logs.go:274] 0 containers: []
	W0531 10:46:33.972339    7706 logs.go:276] No container was found matching "coredns"
	I0531 10:46:33.972398    7706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 10:46:34.001907    7706 logs.go:274] 0 containers: []
	W0531 10:46:34.001920    7706 logs.go:276] No container was found matching "kube-scheduler"
	I0531 10:46:34.001974    7706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 10:46:34.037134    7706 logs.go:274] 0 containers: []
	W0531 10:46:34.037147    7706 logs.go:276] No container was found matching "kube-proxy"
	I0531 10:46:34.037200    7706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 10:46:34.065833    7706 logs.go:274] 0 containers: []
	W0531 10:46:34.065845    7706 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 10:46:34.065898    7706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 10:46:34.095083    7706 logs.go:274] 0 containers: []
	W0531 10:46:34.095095    7706 logs.go:276] No container was found matching "storage-provisioner"
	I0531 10:46:34.095149    7706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 10:46:34.124015    7706 logs.go:274] 0 containers: []
	W0531 10:46:34.124028    7706 logs.go:276] No container was found matching "kube-controller-manager"
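	
	Note: each scan above shells out to docker ps with a k8s_<component> name filter and treats empty output as "no container found". A minimal Go sketch of one such scan, assuming docker is on PATH:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs runs the same query as the log lines above:
	// docker ps -a --filter name=k8s_<component> --format {{.ID}}
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println("docker ps failed:", err)
				return
			}
			if len(ids) == 0 {
				// The control plane never started, so every scan comes back empty.
				fmt.Printf("no container found matching %q\n", c)
			}
		}
	}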
	I0531 10:46:34.124035    7706 logs.go:123] Gathering logs for kubelet ...
	I0531 10:46:34.124042    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 10:46:34.162164    7706 logs.go:123] Gathering logs for dmesg ...
	I0531 10:46:34.162177    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 10:46:34.174968    7706 logs.go:123] Gathering logs for describe nodes ...
	I0531 10:46:34.174981    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 10:46:34.225752    7706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
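	
	Note: the describe-nodes failure is a connection refused on the apiserver port, consistent with the empty container scans above. A quick reachability check, assuming the same localhost:8443 endpoint as the log:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// kubectl was refused on localhost:8443; a bare TCP dial separates
		// "nothing is listening" from TLS or credential problems.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}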
	I0531 10:46:34.225769    7706 logs.go:123] Gathering logs for Docker ...
	I0531 10:46:34.225775    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 10:46:34.239720    7706 logs.go:123] Gathering logs for container status ...
	I0531 10:46:34.239732    7706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 10:46:36.295684    7706 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055940414s)
	W0531 10:46:36.295794    7706 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0531 17:44:37.662771    3759 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0531 17:44:37.662824    3759 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0531 17:44:38.960829    3759 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0531 17:44:38.961598    3759 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0531 10:46:36.295809    7706 out.go:239] * 
	W0531 10:46:36.295940    7706 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0531 17:44:37.662771    3759 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0531 17:44:37.662824    3759 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0531 17:44:38.960829    3759 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0531 17:44:38.961598    3759 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 10:46:36.295957    7706 out.go:239] * 
	W0531 10:46:36.296492    7706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 10:46:36.360047    7706 out.go:177] 
	W0531 10:46:36.402573    7706 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0531 17:44:37.662771    3759 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0531 17:44:37.662824    3759 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0531 17:44:38.960829    3759 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0531 17:44:38.961598    3759 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 10:46:36.402737    7706 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 10:46:36.402890    7706 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0531 10:46:36.444431    7706 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220531104214-2169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
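Note: the suggestion in the log points at a kubelet/Docker cgroup-driver mismatch. One way to see which driver the Docker daemon is actually using (this queries a standard docker info field; whether to run it on the host or inside the node container depends on the setup):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// docker info exposes the daemon's cgroup driver; the kubelet must be
	// configured to match it, which is what the suggested
	// --extra-config=kubelet.cgroup-driver=systemd flag forces.
	out, err := exec.Command("docker", "info",
		"--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Println("docker cgroup driver:", strings.TrimSpace(string(out)))
}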
panic.go:482: *** TestPreload FAILED at 2022-05-31 10:46:36.551519 -0700 PDT m=+2070.088401688
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220531104214-2169
helpers_test.go:235: (dbg) docker inspect test-preload-20220531104214-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a454daec28197eb5474de93855ffcb292b3146b099c2cfd2f22c34b8135760ca",
	        "Created": "2022-05-31T17:42:16.789403891Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91597,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:42:17.095269689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/a454daec28197eb5474de93855ffcb292b3146b099c2cfd2f22c34b8135760ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a454daec28197eb5474de93855ffcb292b3146b099c2cfd2f22c34b8135760ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/a454daec28197eb5474de93855ffcb292b3146b099c2cfd2f22c34b8135760ca/hosts",
	        "LogPath": "/var/lib/docker/containers/a454daec28197eb5474de93855ffcb292b3146b099c2cfd2f22c34b8135760ca/a454daec28197eb5474de93855ffcb292b3146b099c2cfd2f22c34b8135760ca-json.log",
	        "Name": "/test-preload-20220531104214-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220531104214-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220531104214-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5015dc926346040a56534ca0aac5d7c3c58c15015f3949f938c58842bc35565c-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5015dc926346040a56534ca0aac5d7c3c58c15015f3949f938c58842bc35565c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5015dc926346040a56534ca0aac5d7c3c58c15015f3949f938c58842bc35565c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5015dc926346040a56534ca0aac5d7c3c58c15015f3949f938c58842bc35565c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220531104214-2169",
	                "Source": "/var/lib/docker/volumes/test-preload-20220531104214-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220531104214-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220531104214-2169",
	                "name.minikube.sigs.k8s.io": "test-preload-20220531104214-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "03aed69efe7337f80aefaffa25369c1b4df275e46ea63d8c5c247d27738caa77",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58830"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58831"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58833"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58834"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/03aed69efe73",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220531104214-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a454daec2819",
	                        "test-preload-20220531104214-2169"
	                    ],
	                    "NetworkID": "357a968a6ec3e299aa8e1ee06d0504268098ebd27f00a8932e83d10ac69ce1a3",
	                    "EndpointID": "900d2526f7c826ad88f7ea9e16b7ef7f36101feb3da5776f328ce05b8844e791",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
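Note: the post-mortem dumps the full inspect document; when a single field is enough, the same data is available through a format query, which is how the cli_runner calls elsewhere in this report read container state. A sketch using the container name from this test:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the docker CLI for one field instead of parsing the inspect JSON.
	name := "test-preload-20220531104214-2169" // container name from the report
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container state:", strings.TrimSpace(string(out)))
}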
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220531104214-2169 -n test-preload-20220531104214-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220531104214-2169 -n test-preload-20220531104214-2169: exit status 6 (417.722863ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 10:46:37.030058    7875 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220531104214-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220531104214-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
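Note: the status error above (status.go:413) means the profile's cluster entry is missing from the kubeconfig file. A much-simplified sketch of that check, treating it as a substring scan rather than a full kubeconfig parse; the profile name is taken from the log and the path is passed as an argument:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Simplification: minikube parses the kubeconfig properly; for a quick
	// diagnosis, a substring scan shows whether the profile appears at all.
	profile := "test-preload-20220531104214-2169"
	if len(os.Args) < 2 {
		fmt.Println("usage: kubecheck <kubeconfig-path>")
		return
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Println("cannot read kubeconfig:", err)
		return
	}
	if !strings.Contains(string(data), profile) {
		fmt.Printf("%q does not appear in %s\n", profile, os.Args[1])
	}
}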
helpers_test.go:175: Cleaning up "test-preload-20220531104214-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220531104214-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220531104214-2169: (2.513936665s)
--- FAIL: TestPreload (264.74s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (50.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.926291717.exe start -p running-upgrade-20220531105117-2169 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.926291717.exe start -p running-upgrade-20220531105117-2169 --memory=2200 --vm-driver=docker : exit status 70 (34.29209836s)

                                                
                                                
-- stdout --
	! [running-upgrade-20220531105117-2169] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3853054038
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:51:34.512014896 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220531105117-2169" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:51:50.518798864 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220531105117-2169", then "minikube start -p running-upgrade-20220531105117-2169 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.25.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try 'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:51:50.518798864 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
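The root cause is visible in the diff above: the v1.9.0 provisioner rewrites /lib/systemd/system/docker.service wholesale (the `sudo diff -u ... || { sudo mv ...; systemctl restart docker; }` command swaps the file in only when the two differ, since diff exits zero on identical files), and the restarted unit then fails to come up. The inline comments in the generated unit describe the standard systemd idiom it relies on: an empty `ExecStart=` clears any inherited command first, because services other than Type=oneshot may declare only one ExecStart. A minimal sketch of that same idiom as a drop-in override (file name and dockerd flags here are illustrative, not the values from this run):

    $ sudo mkdir -p /etc/systemd/system/docker.service.d
    $ sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf <<'EOF'
    [Service]
    # The first directive clears the ExecStart inherited from the base unit;
    # the second sets the replacement command.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    $ sudo systemctl daemon-reload && sudo systemctl restart docker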
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.926291717.exe start -p running-upgrade-20220531105117-2169 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.926291717.exe start -p running-upgrade-20220531105117-2169 --memory=2200 --vm-driver=docker : exit status 70 (4.706224627s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220531105117-2169] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1719662845
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220531105117-2169" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
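When `systemctl start docker` exits non-zero like this, the two commands the error message points at are the right first stops; journald keeps dockerd's own stderr, which the test log never captures. A sketch (unit name from the log; the flags are standard systemctl/journalctl options):

    $ systemctl status docker.service --no-pager     # state of the last start attempt and its exit code
    $ journalctl -xeu docker.service | tail -n 50    # dockerd's own error output around the failure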
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.926291717.exe start -p running-upgrade-20220531105117-2169 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.926291717.exe start -p running-upgrade-20220531105117-2169 --memory=2200 --vm-driver=docker : exit status 70 (4.526423893s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220531105117-2169] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig243702957
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220531105117-2169" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-05-31 10:52:04.717796 -0700 PDT m=+2398.287020882
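Before the post-mortem, note that the first attempt's output already carried the recovery advice; outside the harness the sequence would be (commands copied from the minikube hint above):

    $ minikube delete -p running-upgrade-20220531105117-2169
    $ minikube start -p running-upgrade-20220531105117-2169 --alsologtostderr -v=1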
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220531105117-2169
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220531105117-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e3192666ba3a9bb06844e5da9378de37c94d1cc29779aad34c3f4bb1f8f0f6cf",
	        "Created": "2022-05-31T17:51:42.739989537Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 125400,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:51:42.963767323Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/e3192666ba3a9bb06844e5da9378de37c94d1cc29779aad34c3f4bb1f8f0f6cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e3192666ba3a9bb06844e5da9378de37c94d1cc29779aad34c3f4bb1f8f0f6cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/e3192666ba3a9bb06844e5da9378de37c94d1cc29779aad34c3f4bb1f8f0f6cf/hosts",
	        "LogPath": "/var/lib/docker/containers/e3192666ba3a9bb06844e5da9378de37c94d1cc29779aad34c3f4bb1f8f0f6cf/e3192666ba3a9bb06844e5da9378de37c94d1cc29779aad34c3f4bb1f8f0f6cf-json.log",
	        "Name": "/running-upgrade-20220531105117-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220531105117-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a7ddc7c6bc450629fc0f07ff8336e0bcffc5ce920e368370b9538bd7f6931b2c-init/diff:/var/lib/docker/overlay2/68730985f7cfd3b645dffaaf625a84e0f45a2e522a7bbd35c74f3e961455c955/diff:/var/lib/docker/overlay2/086a9a5d11913cdd684dceb8ac095d883dd96aeffd0e2f279790b7c3992d505d/diff:/var/lib/docker/overlay2/4a7767ee605e9d3846f50062d68dbb144b6c872e261ea175128352b6a2008186/diff:/var/lib/docker/overlay2/90cf826a4010a4a3587a817d18da915c42b4f8d827d97ec08235753517cf7cba/diff:/var/lib/docker/overlay2/eaa2a7e56e26bbbbe52325d4dd17430b5f88783e1d7106afef9cb75f9f64673a/diff:/var/lib/docker/overlay2/e79fa306793a060f9fc9b0e6d7b5ef03378cf4fbe65d7c89e8f0ccfcf0562282/diff:/var/lib/docker/overlay2/bba27b2a99740d20b41b7850c0375cecc063e583b9afd93a82a7cf23a44cb8f1/diff:/var/lib/docker/overlay2/6cf665e8f6ea0dc4d08cacc5e06e998a6fd9208a2e8197f3d9a7fc6f28369cbc/diff:/var/lib/docker/overlay2/c7213236b6f74adfad523b3a0745db25c9c3b5aaa7be452e8c7562ac9af55529/diff:/var/lib/docker/overlay2/e6b28f
3ff5c1a7df3787620c5367e76e5d082a2719852854a0059452497aac2d/diff:/var/lib/docker/overlay2/c68b5a0b50ed2410ef2428f9ca77e4af1a8ff0f3c90c1ba30ef5f42e7c2f0fe3/diff:/var/lib/docker/overlay2/3062e3729948d2242933a53d46e139d56542622bc84399d578827874566ec45d/diff:/var/lib/docker/overlay2/5ea2fa0caf63c907fa5f7230a4d86016224b7a8090e21ccd0fafbaacc9b02989/diff:/var/lib/docker/overlay2/d321375c7b5f3519273186dddf87e312e97664c8899baad733ed047158e48167/diff:/var/lib/docker/overlay2/51b4d7bff48b339142e73ea6bf81882193895d7beee21763c05808dc42417831/diff:/var/lib/docker/overlay2/6cc3fdbbe55a5101cad2d2f3a19f351f440ca4ce572bd9590d534a0d4e756871/diff:/var/lib/docker/overlay2/c7b81ca26ce547908b8589973f707ab55de536d55f4e91ff33c4ad44c6335157/diff:/var/lib/docker/overlay2/54518fc6c0f4bd67872c1a8f18d57e28e9977220eb6b786882bdee74547cfd52/diff:/var/lib/docker/overlay2/a70efa960030191dd9226c96dd524ab1af6b4c40f8037297a048af6ce65e7b91/diff:/var/lib/docker/overlay2/4287ba7d9b601768fcd455102b8577d6e47986dacfe67932cb862726d4269593/diff:/var/lib/d
ocker/overlay2/8cc5c99c5858de4fd5685625834a50fc3618c82d09969525ed7b0605000309eb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7ddc7c6bc450629fc0f07ff8336e0bcffc5ce920e368370b9538bd7f6931b2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7ddc7c6bc450629fc0f07ff8336e0bcffc5ce920e368370b9538bd7f6931b2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7ddc7c6bc450629fc0f07ff8336e0bcffc5ce920e368370b9538bd7f6931b2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220531105117-2169",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220531105117-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220531105117-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220531105117-2169",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220531105117-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afdbc197477e6cfe2ee4cdf0308078eafcae692b16b70212b917fb2939c10a4e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61690"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61691"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61689"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/afdbc197477e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "eb5f09274c46cf7513dd1bc2be28f6d95a5488a7f3c710d39bfc7097b6e7cd37",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "4a63e4a8a5c8fc043bcd14188ed64ac2860bba2a6c9a76a1f934032a0376ca21",
	                    "EndpointID": "eb5f09274c46cf7513dd1bc2be28f6d95a5488a7f3c710d39bfc7097b6e7cd37",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
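The post-mortem dumps the full `docker inspect` JSON; individual fields can be pulled with the same Go-template `--format` mechanism rather than read by eye. A sketch against the container above, with expected values taken from the JSON just printed:

    $ docker inspect --format '{{.State.Running}}' running-upgrade-20220531105117-2169
    true
    $ docker inspect --format '{{.NetworkSettings.IPAddress}}' running-upgrade-20220531105117-2169
    172.17.0.2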
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220531105117-2169 -n running-upgrade-20220531105117-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220531105117-2169 -n running-upgrade-20220531105117-2169: exit status 6 (430.502235ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 10:52:05.207210    9639 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220531105117-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220531105117-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220531105117-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220531105117-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220531105117-2169: (2.552016163s)
--- FAIL: TestRunningBinaryUpgrade (50.24s)

                                                
                                    
x
+
TestKubernetesUpgrade (303.07s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0531 10:53:03.068773    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 10:54:00.041304    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:00.047688    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:00.059872    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:00.082138    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:00.122346    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:00.202965    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:00.363284    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:00.684170    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:01.324569    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:02.604723    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:05.228249    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 10:54:08.528117    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:54:10.348455    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
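These E-lines come from client-go's certificate-rotation watcher, not from this test: it still holds paths to client certs of profiles (functional-, skaffold-, addons-) that earlier tests already deleted, so every reload attempt fails with "no such file or directory". They appear to be harmless noise for TestKubernetesUpgrade. A quick way to confirm the files are gone and see which contexts the kubeconfig still references (a sketch, assuming MINIKUBE_HOME points at the .minikube directory as in this run's environment):

    $ ls "$MINIKUBE_HOME/profiles"            # profiles still on disk
    $ kubectl config get-contexts -o name     # contexts kubeconfig still lists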

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m13.186452196s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220531105258-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubernetes-upgrade-20220531105258-2169 in cluster kubernetes-upgrade-20220531105258-2169
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 10:52:58.561259    9961 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:52:58.561405    9961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:52:58.561411    9961 out.go:309] Setting ErrFile to fd 2...
	I0531 10:52:58.561415    9961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:52:58.561517    9961 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:52:58.561837    9961 out.go:303] Setting JSON to false
	I0531 10:52:58.576766    9961 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3147,"bootTime":1654016431,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:52:58.576917    9961 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:52:58.599255    9961 out.go:177] * [kubernetes-upgrade-20220531105258-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:52:58.641936    9961 notify.go:193] Checking for updates...
	I0531 10:52:58.663668    9961 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 10:52:58.684566    9961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:52:58.706034    9961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:52:58.728008    9961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:52:58.749988    9961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 10:52:58.773937    9961 config.go:178] Loaded profile config "cert-expiration-20220531105047-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:52:58.774196    9961 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 10:52:58.846808    9961 docker.go:137] docker version: linux-20.10.14
	I0531 10:52:58.846965    9961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:52:58.972847    9961 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 17:52:58.912035867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:52:58.995109    9961 out.go:177] * Using the docker driver based on user configuration
	I0531 10:52:59.016549    9961 start.go:284] selected driver: docker
	I0531 10:52:59.016567    9961 start.go:806] validating driver "docker" against <nil>
	I0531 10:52:59.016583    9961 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 10:52:59.018886    9961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:52:59.143338    9961 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 17:52:59.083369324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:52:59.143470    9961 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 10:52:59.143626    9961 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 10:52:59.165745    9961 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 10:52:59.193785    9961 cni.go:95] Creating CNI manager for ""
	I0531 10:52:59.193816    9961 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:52:59.193844    9961 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220531105258-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220531105258-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:52:59.214028    9961 out.go:177] * Starting control plane node kubernetes-upgrade-20220531105258-2169 in cluster kubernetes-upgrade-20220531105258-2169
	I0531 10:52:59.256274    9961 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 10:52:59.278063    9961 out.go:177] * Pulling base image ...
	I0531 10:52:59.320074    9961 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 10:52:59.320119    9961 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 10:52:59.384687    9961 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 10:52:59.384765    9961 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 10:52:59.394607    9961 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 10:52:59.394627    9961 cache.go:57] Caching tarball of preloaded images
	I0531 10:52:59.394877    9961 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 10:52:59.438348    9961 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0531 10:52:59.459480    9961 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0531 10:52:59.556977    9961 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 10:53:03.520043    9961 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0531 10:53:03.520208    9961 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0531 10:53:04.067187    9961 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0531 10:53:04.067269    9961 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/config.json ...
	I0531 10:53:04.067294    9961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/config.json: {Name:mk7abac2a994ae598ffb6a71d075e0f4e7e73a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:53:04.067590    9961 cache.go:206] Successfully downloaded all kic artifacts
	I0531 10:53:04.067621    9961 start.go:352] acquiring machines lock for kubernetes-upgrade-20220531105258-2169: {Name:mk3d81b3376198a6c8d2e350b5439f1f4cb92e9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:53:04.067706    9961 start.go:356] acquired machines lock for "kubernetes-upgrade-20220531105258-2169" in 77.282µs
	I0531 10:53:04.067730    9961 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220531105258-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220531105258
-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Po
rt:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 10:53:04.067777    9961 start.go:131] createHost starting for "" (driver="docker")
	I0531 10:53:04.110775    9961 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 10:53:04.111163    9961 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220531105258-2169" (driver="docker")
	I0531 10:53:04.111209    9961 client.go:168] LocalClient.Create starting
	I0531 10:53:04.111356    9961 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 10:53:04.111422    9961 main.go:134] libmachine: Decoding PEM data...
	I0531 10:53:04.111452    9961 main.go:134] libmachine: Parsing certificate...
	I0531 10:53:04.111546    9961 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 10:53:04.111610    9961 main.go:134] libmachine: Decoding PEM data...
	I0531 10:53:04.111631    9961 main.go:134] libmachine: Parsing certificate...
	I0531 10:53:04.112388    9961 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220531105258-2169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 10:53:04.176376    9961 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220531105258-2169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 10:53:04.176454    9961 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220531105258-2169] to gather additional debugging logs...
	I0531 10:53:04.176474    9961 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220531105258-2169
	W0531 10:53:04.238308    9961 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220531105258-2169 returned with exit code 1
	I0531 10:53:04.238333    9961 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220531105258-2169]: docker network inspect kubernetes-upgrade-20220531105258-2169: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220531105258-2169
	I0531 10:53:04.238356    9961 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220531105258-2169]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220531105258-2169
	
	** /stderr **
	I0531 10:53:04.238443    9961 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 10:53:04.300926    9961 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00033e330] misses:0}
	I0531 10:53:04.300961    9961 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 10:53:04.300976    9961 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220531105258-2169 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 10:53:04.301037    9961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220531105258-2169
	W0531 10:53:04.363215    9961 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220531105258-2169 returned with exit code 1
	W0531 10:53:04.363281    9961 network_create.go:107] failed to create docker network kubernetes-upgrade-20220531105258-2169 192.168.49.0/24, will retry: subnet is taken
	I0531 10:53:04.363568    9961 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00033e330] amended:false}} dirty:map[] misses:0}
	I0531 10:53:04.363582    9961 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 10:53:04.363786    9961 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00033e330] amended:true}} dirty:map[192.168.49.0:0xc00033e330 192.168.58.0:0xc00000e350] misses:0}
	I0531 10:53:04.363802    9961 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 10:53:04.363809    9961 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220531105258-2169 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 10:53:04.363889    9961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220531105258-2169
	I0531 10:53:04.456132    9961 network_create.go:99] docker network kubernetes-upgrade-20220531105258-2169 192.168.58.0/24 created
	I0531 10:53:04.456177    9961 kic.go:106] calculated static IP "192.168.58.2" for the "kubernetes-upgrade-20220531105258-2169" container
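
The lines above show the subnet picker in action: reserve a candidate /24 for one minute, try to create the network, and on "subnet is taken" rescan while skipping subnets that still hold an unexpired reservation. A rough Go sketch of that loop, assuming candidates step from 192.168.49.0/24 in increments of 9 (which matches the 49 to 58 jump in this log, though the real candidate logic lives in minikube's network.go):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // reservations tracks subnets reserved for a TTL, loosely modeled on
    // the "reserving subnet ... for 1m0s" / "unexpired reservation" lines.
    type reservations struct {
        mu sync.Mutex
        m  map[string]time.Time // subnet -> reservation expiry
    }

    func (r *reservations) reserve(subnet string, ttl time.Duration) bool {
        r.mu.Lock()
        defer r.mu.Unlock()
        if exp, ok := r.m[subnet]; ok && time.Now().Before(exp) {
            return false // unexpired reservation: skip this subnet
        }
        r.m[subnet] = time.Now().Add(ttl)
        return true
    }

    // createNetwork stands in for `docker network create`; here we just
    // pretend the first candidate collides, as happened in this run.
    func createNetwork(subnet string) error {
        if subnet == "192.168.49.0/24" {
            return fmt.Errorf("subnet is taken")
        }
        return nil
    }

    func main() {
        res := &reservations{m: map[string]time.Time{}}
        for attempt := 0; attempt < 5; attempt++ {
            for third := 49; third <= 247; third += 9 {
                subnet := fmt.Sprintf("192.168.%d.0/24", third)
                if !res.reserve(subnet, time.Minute) {
                    continue // someone (or a failed prior attempt) holds it
                }
                if err := createNetwork(subnet); err != nil {
                    fmt.Printf("failed to create %s, will retry: %v\n", subnet, err)
                    break // rescan from the start; this subnet stays reserved
                }
                fmt.Println("created docker network on", subnet)
                return
            }
        }
    }
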
	I0531 10:53:04.456321    9961 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 10:53:04.521886    9961 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220531105258-2169 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531105258-2169 --label created_by.minikube.sigs.k8s.io=true
	I0531 10:53:04.583528    9961 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220531105258-2169
	I0531 10:53:04.583657    9961 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220531105258-2169-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531105258-2169 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220531105258-2169:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 10:53:05.046115    9961 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220531105258-2169
	I0531 10:53:05.046149    9961 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 10:53:05.046161    9961 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 10:53:05.046275    9961 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220531105258-2169:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 10:53:08.855083    9961 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220531105258-2169:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (3.808777429s)
	I0531 10:53:08.855117    9961 kic.go:188] duration metric: took 3.809001 seconds to extract preloaded images to volume
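
The "preload-sidecar" step above avoids pushing images through the Docker API: the .tar.lz4 is bind-mounted read-only and a throwaway container running the kicbase image's tar untars it straight into the named volume (hence the ~3.8s duration metric). A sketch of composing that same `docker run` from Go; the paths and image reference passed in main are placeholders:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload mounts the lz4 preload read-only at /preloaded.tar,
    // mounts the machine's named volume at /extractDir, and lets the
    // image's /usr/bin/tar (which must support -I lz4) do the extraction.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        _ = extractPreload("/path/to/preloaded-images.tar.lz4",
            "kubernetes-upgrade-20220531105258-2169",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.31")
    }
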
	I0531 10:53:08.855234    9961 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 10:53:08.980776    9961 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531105258-2169 --name kubernetes-upgrade-20220531105258-2169 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531105258-2169 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531105258-2169 --network kubernetes-upgrade-20220531105258-2169 --ip 192.168.58.2 --volume kubernetes-upgrade-20220531105258-2169:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 10:53:09.366508    9961 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531105258-2169 --format={{.State.Running}}
	I0531 10:53:09.439242    9961 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531105258-2169 --format={{.State.Status}}
	I0531 10:53:09.511798    9961 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220531105258-2169 stat /var/lib/dpkg/alternatives/iptables
	I0531 10:53:09.640645    9961 oci.go:247] the created container "kubernetes-upgrade-20220531105258-2169" has a running status.
	I0531 10:53:09.640674    9961 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa...
	I0531 10:53:09.767096    9961 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 10:53:09.876333    9961 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531105258-2169 --format={{.State.Status}}
	I0531 10:53:09.946715    9961 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 10:53:09.946735    9961 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220531105258-2169 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 10:53:10.076987    9961 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531105258-2169 --format={{.State.Status}}
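
The repeated `docker container inspect --format={{.State.Status}}` calls above amount to a readiness poll on the freshly started container. A small Go version of such a poll; the interval and timeout here are arbitrary, not minikube's actual values:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitRunning polls the container's State.Status until it reports
    // "running" or the deadline passes.
    func waitRunning(name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "running" {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("container %s not running after %s", name, timeout)
    }

    func main() { fmt.Println(waitRunning("demo", 10*time.Second)) }
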
	I0531 10:53:10.146543    9961 machine.go:88] provisioning docker machine ...
	I0531 10:53:10.146585    9961 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220531105258-2169"
	I0531 10:53:10.146676    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:10.216075    9961 main.go:134] libmachine: Using SSH client type: native
	I0531 10:53:10.216281    9961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62562 <nil> <nil>}
	I0531 10:53:10.216294    9961 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220531105258-2169 && echo "kubernetes-upgrade-20220531105258-2169" | sudo tee /etc/hostname
	I0531 10:53:10.332832    9961 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220531105258-2169
	
	I0531 10:53:10.332905    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:10.403346    9961 main.go:134] libmachine: Using SSH client type: native
	I0531 10:53:10.403507    9961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62562 <nil> <nil>}
	I0531 10:53:10.403523    9961 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220531105258-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220531105258-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220531105258-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 10:53:10.514448    9961 main.go:134] libmachine: SSH cmd err, output: <nil>: 
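
Each "About to run SSH command" step goes through minikube's native SSH client; note the dialed port 62562, which Docker published for the container's 22/tcp. A minimal sketch of running one provisioning command over SSH with golang.org/x/crypto/ssh; the address, user, and key path in main are placeholders, not values taken from this run:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH dials the published SSH port with a private key and runs a
    // single command, returning its combined output.
    func runSSH(addr, user, keyPath, command string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable on a local test rig only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(command)
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:62562", "docker", "id_rsa",
            `sudo hostname demo && echo "demo" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }
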
	I0531 10:53:10.514468    9961 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 10:53:10.514486    9961 ubuntu.go:177] setting up certificates
	I0531 10:53:10.514495    9961 provision.go:83] configureAuth start
	I0531 10:53:10.514549    9961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:10.584902    9961 provision.go:138] copyHostCerts
	I0531 10:53:10.584982    9961 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 10:53:10.584990    9961 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 10:53:10.585108    9961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 10:53:10.585287    9961 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 10:53:10.585309    9961 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 10:53:10.585367    9961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 10:53:10.587697    9961 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 10:53:10.587703    9961 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 10:53:10.587756    9961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 10:53:10.587869    9961 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220531105258-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220531105258-2169]
	I0531 10:53:10.752521    9961 provision.go:172] copyRemoteCerts
	I0531 10:53:10.752589    9961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 10:53:10.752649    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:10.823308    9961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62562 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa Username:docker}
	I0531 10:53:10.906754    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 10:53:10.923783    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0531 10:53:10.940551    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 10:53:10.957534    9961 provision.go:86] duration metric: configureAuth took 443.032986ms
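
configureAuth regenerates the Docker daemon's server certificate so its SANs cover the new container IP (see the san=[...] list above) and ships it to /etc/docker over SCP. A compressed illustration of the signing step using Go's crypto/x509; error handling is elided for brevity, and the on-the-spot self-signed CA stands in for the persistent ca.pem/ca-key.pem under .minikube/certs:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; minikube would load the existing machine CA instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert whose SANs cover the container IP, loopback, and hostnames.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
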
	I0531 10:53:10.957547    9961 ubuntu.go:193] setting minikube options for container-runtime
	I0531 10:53:10.957671    9961 config.go:178] Loaded profile config "kubernetes-upgrade-20220531105258-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 10:53:10.957723    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:11.032948    9961 main.go:134] libmachine: Using SSH client type: native
	I0531 10:53:11.033122    9961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62562 <nil> <nil>}
	I0531 10:53:11.033138    9961 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 10:53:11.143305    9961 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 10:53:11.143317    9961 ubuntu.go:71] root file system type: overlay
	I0531 10:53:11.143465    9961 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 10:53:11.143547    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:11.214017    9961 main.go:134] libmachine: Using SSH client type: native
	I0531 10:53:11.214175    9961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62562 <nil> <nil>}
	I0531 10:53:11.214225    9961 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 10:53:11.334757    9961 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 10:53:11.334835    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:11.405956    9961 main.go:134] libmachine: Using SSH client type: native
	I0531 10:53:11.406119    9961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62562 <nil> <nil>}
	I0531 10:53:11.406133    9961 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 10:53:11.995532    9961 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:53:11.349623418 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0531 10:53:11.995554    9961 machine.go:91] provisioned docker machine in 1.849016497s
	I0531 10:53:11.995561    9961 client.go:171] LocalClient.Create took 7.884441728s
	I0531 10:53:11.995575    9961 start.go:173] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220531105258-2169" took 7.8845094s
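
Note the `diff -u ... || { mv ...; systemctl ... restart docker; }` command a few lines up: the rendered unit is written to docker.service.new, and the daemon is only reloaded and restarted when the file actually changed, which keeps provisioning idempotent across repeated starts. A rough Go equivalent of that guard; the paths in main are placeholders:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit swaps in the rendered unit file and restarts docker only
    // when the contents differ from what is already installed.
    func updateUnit(path string, rendered []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // unchanged: skip the reload/restart entirely
        }
        if err := os.WriteFile(path+".new", rendered, 0644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %s", err, out)
            }
        }
        return nil
    }

    func main() { fmt.Println(updateUnit("/tmp/docker.service", []byte("[Unit]\n"))) }
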
	I0531 10:53:11.995580    9961 start.go:306] post-start starting for "kubernetes-upgrade-20220531105258-2169" (driver="docker")
	I0531 10:53:11.995584    9961 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 10:53:11.995662    9961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 10:53:11.995712    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:12.069439    9961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62562 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa Username:docker}
	I0531 10:53:12.152972    9961 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 10:53:12.156673    9961 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 10:53:12.156690    9961 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 10:53:12.156697    9961 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 10:53:12.156704    9961 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 10:53:12.156712    9961 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 10:53:12.156821    9961 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 10:53:12.156958    9961 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 10:53:12.157107    9961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 10:53:12.169352    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 10:53:12.186534    9961 start.go:309] post-start completed in 190.948455ms
	I0531 10:53:12.187036    9961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:12.257346    9961 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/config.json ...
	I0531 10:53:12.257799    9961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 10:53:12.257867    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:12.327657    9961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62562 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa Username:docker}
	I0531 10:53:12.409356    9961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 10:53:12.415239    9961 start.go:134] duration metric: createHost completed in 8.347555835s
	I0531 10:53:12.415265    9961 start.go:81] releasing machines lock for "kubernetes-upgrade-20220531105258-2169", held for 8.347645696s
	I0531 10:53:12.415345    9961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:12.486018    9961 ssh_runner.go:195] Run: systemctl --version
	I0531 10:53:12.486019    9961 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 10:53:12.486091    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:12.486138    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:12.561781    9961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62562 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa Username:docker}
	I0531 10:53:12.563361    9961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62562 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa Username:docker}
	I0531 10:53:12.643143    9961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 10:53:12.771721    9961 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 10:53:12.781770    9961 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 10:53:12.781835    9961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 10:53:12.791034    9961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 10:53:12.805530    9961 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 10:53:12.873799    9961 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 10:53:12.944535    9961 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 10:53:12.954190    9961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 10:53:13.020346    9961 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 10:53:13.029987    9961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 10:53:13.065951    9961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 10:53:13.144856    9961 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0531 10:53:13.145039    9961 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220531105258-2169 dig +short host.docker.internal
	I0531 10:53:13.287257    9961 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 10:53:13.287373    9961 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 10:53:13.291528    9961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
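
The hosts update just above follows the usual minikube pattern: filter out any stale `host.minikube.internal` line, append the fresh mapping, and copy the temp file over /etc/hosts in a single step. A Go sketch of the same rewrite, pointed at a scratch file rather than the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry drops any existing tab-separated entry for name and
    // appends the fresh ip<TAB>name mapping, replacing the file via a
    // temp file so the swap happens in one rename.
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        fmt.Println(setHostsEntry("hosts.demo", "192.168.65.2", "host.minikube.internal"))
    }
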
	I0531 10:53:13.301392    9961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:53:13.371840    9961 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 10:53:13.371901    9961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 10:53:13.400764    9961 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 10:53:13.400784    9961 docker.go:541] Images already preloaded, skipping extraction
	I0531 10:53:13.400873    9961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 10:53:13.430232    9961 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 10:53:13.430251    9961 cache_images.go:84] Images are preloaded, skipping loading
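
Listing `docker images --format {{.Repository}}:{{.Tag}}` is a cheap presence check: since every expected v1.16.0 image is already in the daemon (extracted from the preload earlier), the slow image-load path is skipped. A sketch of such a check:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether every expected image tag is already
    // present in the local Docker daemon.
    func imagesPreloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := imagesPreloaded([]string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/etcd:3.3.15-0"})
        fmt.Println(ok, err)
    }
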
	I0531 10:53:13.430313    9961 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 10:53:13.506052    9961 cni.go:95] Creating CNI manager for ""
	I0531 10:53:13.506067    9961 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:53:13.506082    9961 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 10:53:13.506102    9961 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220531105258-2169 NodeName:kubernetes-upgrade-20220531105258-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 10:53:13.506199    9961 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220531105258-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220531105258-2169
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
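
minikube renders the YAML above from the option struct logged at "kubeadm options:". A toy version of that templating step; the template text below is a small fragment invented for illustration, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // frag is a cut-down stand-in for the kubeadm config template.
    const frag = `apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        t.Execute(os.Stdout, map[string]interface{}{
            "AdvertiseAddress": "192.168.58.2",
            "APIServerPort":    8443,
            "CRISocket":        "/var/run/dockershim.sock",
            "NodeName":         "kubernetes-upgrade-20220531105258-2169",
        })
    }
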
	
	I0531 10:53:13.506273    9961 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220531105258-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220531105258-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 10:53:13.506328    9961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0531 10:53:13.514136    9961 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 10:53:13.514196    9961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 10:53:13.521518    9961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0531 10:53:13.533944    9961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 10:53:13.547544    9961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0531 10:53:13.560632    9961 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 10:53:13.564587    9961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 10:53:13.592993    9961 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169 for IP: 192.168.58.2
	I0531 10:53:13.593099    9961 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 10:53:13.593148    9961 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 10:53:13.593191    9961 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/client.key
	I0531 10:53:13.593202    9961 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/client.crt with IP's: []
	I0531 10:53:13.764422    9961 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/client.crt ...
	I0531 10:53:13.764439    9961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/client.crt: {Name:mk238c9fc7d543c567a2ddf1e3790f3a9882926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:53:13.764738    9961 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/client.key ...
	I0531 10:53:13.764752    9961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/client.key: {Name:mk557973ca2612be6048a41c3cf30f4f23d75ad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:53:13.764956    9961 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.key.cee25041
	I0531 10:53:13.764972    9961 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 10:53:13.838903    9961 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.crt.cee25041 ...
	I0531 10:53:13.838913    9961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.crt.cee25041: {Name:mk7f03e931707442b94061f43ab53ffd0d4f7748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:53:13.839132    9961 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.key.cee25041 ...
	I0531 10:53:13.839140    9961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.key.cee25041: {Name:mkdd1dbdc3c91f2703fa4ceba0868b6c096b0190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:53:13.839312    9961 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.crt
	I0531 10:53:13.839464    9961 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.key
	I0531 10:53:13.839613    9961 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/proxy-client.key
	I0531 10:53:13.839629    9961 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/proxy-client.crt with IP's: []
	I0531 10:53:13.990766    9961 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/proxy-client.crt ...
	I0531 10:53:13.990780    9961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/proxy-client.crt: {Name:mk669155fd0a3a85b2ba0b6bd1d9d6e3e3dd70cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:53:13.991063    9961 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/proxy-client.key ...
	I0531 10:53:13.991071    9961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/proxy-client.key: {Name:mk86674d823cd9416182185f6e304b84c66365a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:53:13.991459    9961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 10:53:13.991516    9961 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 10:53:13.991526    9961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 10:53:13.991560    9961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 10:53:13.991593    9961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 10:53:13.991624    9961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 10:53:13.991687    9961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 10:53:13.992195    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 10:53:14.009706    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 10:53:14.026527    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 10:53:14.043708    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubernetes-upgrade-20220531105258-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 10:53:14.060946    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 10:53:14.077907    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 10:53:14.095612    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 10:53:14.112343    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 10:53:14.129274    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 10:53:14.147182    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 10:53:14.164129    9961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 10:53:14.181042    9961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 10:53:14.193683    9961 ssh_runner.go:195] Run: openssl version
	I0531 10:53:14.199358    9961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 10:53:14.207199    9961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 10:53:14.210962    9961 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 10:53:14.211006    9961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 10:53:14.216108    9961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 10:53:14.223375    9961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 10:53:14.231450    9961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 10:53:14.235186    9961 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 10:53:14.235229    9961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 10:53:14.240440    9961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 10:53:14.248339    9961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 10:53:14.255643    9961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:53:14.259488    9961 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:53:14.259527    9961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:53:14.264676    9961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
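
The openssl/ln pairs above install each PEM into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. 51391683.0), the lookup scheme OpenSSL-style clients use when scanning a certificate directory. A sketch of one such installation driven from Go, shelling out to the same openssl invocation; the paths in main are placeholders:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCert computes the certificate's subject hash and symlinks the
    // PEM into the trust directory as <hash>.0.
    func installCert(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := certsDir + "/" + hash + ".0"
        os.Remove(link) // replace any stale link; ignore "not found"
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
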
	I0531 10:53:14.272286    9961 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220531105258-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220531105258-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:53:14.272380    9961 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 10:53:14.299684    9961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 10:53:14.309093    9961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 10:53:14.317098    9961 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 10:53:14.317149    9961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 10:53:14.324317    9961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 10:53:14.324345    9961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 10:53:15.034468    9961 out.go:204]   - Generating certificates and keys ...
	I0531 10:53:17.296158    9961 out.go:204]   - Booting up control plane ...
	W0531 10:55:12.208115    9961 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220531105258-2169 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220531105258-2169 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
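	A note on the advice block above: it can be followed directly inside the minikube node. A minimal triage sketch, assuming the profile name taken from this log (with --driver=docker the "node" is the container inspected later in this report):
	
	    # open a shell in the node, then run the commands the advice names
	    minikube ssh -p kubernetes-upgrade-20220531105258-2169
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    # find a crashed control-plane container and read its logs
	    docker ps -a | grep kube | grep -v pause
	    docker logs CONTAINERID    # substitute a failing container ID from the listing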
	
	I0531 10:55:12.208149    9961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 10:55:12.629479    9961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 10:55:12.639626    9961 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 10:55:12.639670    9961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 10:55:12.647481    9961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 10:55:12.647500    9961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 10:55:13.373292    9961 out.go:204]   - Generating certificates and keys ...
	I0531 10:55:14.077903    9961 out.go:204]   - Booting up control plane ...
	I0531 10:57:09.011588    9961 kubeadm.go:397] StartCluster complete in 3m54.742157267s
	I0531 10:57:09.011668    9961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 10:57:09.042494    9961 logs.go:274] 0 containers: []
	W0531 10:57:09.042506    9961 logs.go:276] No container was found matching "kube-apiserver"
	I0531 10:57:09.042551    9961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 10:57:09.072761    9961 logs.go:274] 0 containers: []
	W0531 10:57:09.072776    9961 logs.go:276] No container was found matching "etcd"
	I0531 10:57:09.072841    9961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 10:57:09.104537    9961 logs.go:274] 0 containers: []
	W0531 10:57:09.104550    9961 logs.go:276] No container was found matching "coredns"
	I0531 10:57:09.104608    9961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 10:57:09.133943    9961 logs.go:274] 0 containers: []
	W0531 10:57:09.133954    9961 logs.go:276] No container was found matching "kube-scheduler"
	I0531 10:57:09.134011    9961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 10:57:09.164549    9961 logs.go:274] 0 containers: []
	W0531 10:57:09.164563    9961 logs.go:276] No container was found matching "kube-proxy"
	I0531 10:57:09.164615    9961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 10:57:09.193964    9961 logs.go:274] 0 containers: []
	W0531 10:57:09.193977    9961 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 10:57:09.194036    9961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 10:57:09.226434    9961 logs.go:274] 0 containers: []
	W0531 10:57:09.226452    9961 logs.go:276] No container was found matching "storage-provisioner"
	I0531 10:57:09.226515    9961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 10:57:09.260000    9961 logs.go:274] 0 containers: []
	W0531 10:57:09.260016    9961 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 10:57:09.260025    9961 logs.go:123] Gathering logs for describe nodes ...
	I0531 10:57:09.260032    9961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 10:57:09.320074    9961 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 10:57:09.320085    9961 logs.go:123] Gathering logs for Docker ...
	I0531 10:57:09.320096    9961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 10:57:09.334159    9961 logs.go:123] Gathering logs for container status ...
	I0531 10:57:09.334180    9961 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 10:57:11.396639    9961 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062470779s)
	I0531 10:57:11.396801    9961 logs.go:123] Gathering logs for kubelet ...
	I0531 10:57:11.396812    9961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 10:57:11.446694    9961 logs.go:123] Gathering logs for dmesg ...
	I0531 10:57:11.446716    9961 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
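	The run above is minikube's diagnostics pass: it probes for each control-plane container by name filter (all return 0 containers), attempts `kubectl describe nodes` (refused, since the apiserver never came up), then collects Docker, container-status, kubelet, and dmesg logs. The same node-side evidence can be pulled by hand; a sketch assuming the docker driver, where the node is the profile-named container:
	
	    # read the kubelet and docker journals straight out of the node container
	    docker exec kubernetes-upgrade-20220531105258-2169 journalctl -u kubelet -n 400 --no-pager
	    docker exec kubernetes-upgrade-20220531105258-2169 journalctl -u docker -n 400 --no-pager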
	W0531 10:57:11.465239    9961 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0531 10:57:11.465262    9961 out.go:239] * 
	W0531 10:57:11.465491    9961 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 10:57:11.465521    9961 out.go:239] * 
	W0531 10:57:11.466109    9961 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 10:57:11.533864    9961 out.go:177] 
	W0531 10:57:11.575646    9961 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 10:57:11.575738    9961 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 10:57:11.575786    9961 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0531 10:57:11.617865    9961 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
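The K8S_KUBELET_NOT_RUNNING exit above comes with a concrete suggestion: a kubelet cgroup-driver mismatch. A retry sketch built only from flags already present in this log plus the suggested extra-config (whether it resolves this particular run is not established by the report):

    minikube start -p kubernetes-upgrade-20220531105258-2169 \
      --memory=2200 --kubernetes-version=v1.16.0 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd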
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220531105258-2169
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220531105258-2169: (1.674299526s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220531105258-2169 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220531105258-2169 status --format={{.Host}}: exit status 7 (127.195027ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : (25.545866412s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220531105258-2169 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (412.077661ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220531105258-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220531105258-2169
	    minikube start -p kubernetes-upgrade-20220531105258-2169 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220531105258-21692 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220531105258-2169 --kubernetes-version=v1.23.6
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : (12.913684911s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-05-31 10:57:52.490163 -0700 PDT m=+2746.063609330
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220531105258-2169
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220531105258-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2306bfa33e32af29aac472ba555821530bcfbf7c7f356b0099b9b199c25645f9",
	        "Created": "2022-05-31T17:53:09.059310761Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 142325,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:57:15.238867567Z",
	            "FinishedAt": "2022-05-31T17:57:12.28374489Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/2306bfa33e32af29aac472ba555821530bcfbf7c7f356b0099b9b199c25645f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2306bfa33e32af29aac472ba555821530bcfbf7c7f356b0099b9b199c25645f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/2306bfa33e32af29aac472ba555821530bcfbf7c7f356b0099b9b199c25645f9/hosts",
	        "LogPath": "/var/lib/docker/containers/2306bfa33e32af29aac472ba555821530bcfbf7c7f356b0099b9b199c25645f9/2306bfa33e32af29aac472ba555821530bcfbf7c7f356b0099b9b199c25645f9-json.log",
	        "Name": "/kubernetes-upgrade-20220531105258-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220531105258-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220531105258-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62bdcb02ec1dc1aebed762ba98518d8432773e9c8c7e4f5b6800ff22b8e215aa-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/docker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef35093e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62bdcb02ec1dc1aebed762ba98518d8432773e9c8c7e4f5b6800ff22b8e215aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62bdcb02ec1dc1aebed762ba98518d8432773e9c8c7e4f5b6800ff22b8e215aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62bdcb02ec1dc1aebed762ba98518d8432773e9c8c7e4f5b6800ff22b8e215aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220531105258-2169",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220531105258-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220531105258-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220531105258-2169",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220531105258-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0088de26ef6d0bf2c18f403988377869bf18db34a13e2e27277187ff9e2f1fb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63925"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63926"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63923"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63924"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c0088de26ef6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220531105258-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2306bfa33e32",
	                        "kubernetes-upgrade-20220531105258-2169"
	                    ],
	                    "NetworkID": "86b01e7bfd8215be61d442c38103e7aeb52638aaa913f7e2c8f35e9f86245e44",
	                    "EndpointID": "06c0585bf87e9e210e86033f191440de85a067bcaab6b6ab14cce60ad8b4fcf0",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
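Rather than rereading the full docker inspect JSON above, individual fields can be pulled out with inspect's Go-template --format flag; these are the same queries the harness issues through cli_runner, shown here as a sketch against the container recorded above:

    docker container inspect kubernetes-upgrade-20220531105258-2169 --format={{.State.Status}}
    docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169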
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220531105258-2169 -n kubernetes-upgrade-20220531105258-2169
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220531105258-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220531105258-2169 logs -n 25: (3.152944006s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | docker-flags-20220531105018-2169       | docker-flags-20220531105018-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | ssh sudo systemctl show docker         |                                        |         |                |                     |                     |
	|         | --property=ExecStart --no-pager        |                                        |         |                |                     |                     |
	| delete  | -p                                     | force-systemd-flag-20220531105017-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | force-systemd-flag-20220531105017-2169 |                                        |         |                |                     |                     |
	| delete  | -p                                     | docker-flags-20220531105018-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | docker-flags-20220531105018-2169       |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-options-20220531105047-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:51 PDT |
	|         | cert-options-20220531105047-2169       |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                                        |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                                        |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                                        |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                                        |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	|         | --apiserver-name=localhost             |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220531105047-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:51 PDT |
	|         | cert-expiration-20220531105047-2169    |                                        |         |                |                     |                     |
	|         | --memory=2048 --cert-expiration=3m     |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| ssh     | cert-options-20220531105047-2169       | cert-options-20220531105047-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:51 PDT | 31 May 22 10:51 PDT |
	|         | ssh openssl x509 -text -noout -in      |                                        |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |         |                |                     |                     |
	| ssh     | -p                                     | cert-options-20220531105047-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:51 PDT | 31 May 22 10:51 PDT |
	|         | cert-options-20220531105047-2169       |                                        |         |                |                     |                     |
	|         | -- sudo cat                            |                                        |         |                |                     |                     |
	|         | /etc/kubernetes/admin.conf             |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-options-20220531105047-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:51 PDT | 31 May 22 10:51 PDT |
	|         | cert-options-20220531105047-2169       |                                        |         |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220531105117-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:52 PDT | 31 May 22 10:52 PDT |
	|         | running-upgrade-20220531105117-2169    |                                        |         |                |                     |                     |
	| delete  | -p                                     | missing-upgrade-20220531105207-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:52 PDT | 31 May 22 10:52 PDT |
	|         | missing-upgrade-20220531105207-2169    |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220531105047-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:54 PDT | 31 May 22 10:54 PDT |
	|         | cert-expiration-20220531105047-2169    |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --cert-expiration=8760h                |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-expiration-20220531105047-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:54 PDT | 31 May 22 10:54 PDT |
	|         | cert-expiration-20220531105047-2169    |                                        |         |                |                     |                     |
	| logs    | -p                                     | stopped-upgrade-20220531105422-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:55 PDT | 31 May 22 10:55 PDT |
	|         | stopped-upgrade-20220531105422-2169    |                                        |         |                |                     |                     |
	| delete  | -p                                     | stopped-upgrade-20220531105422-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:55 PDT | 31 May 22 10:55 PDT |
	|         | stopped-upgrade-20220531105422-2169    |                                        |         |                |                     |                     |
	| start   | -p pause-20220531105516-2169           | pause-20220531105516-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 10:55 PDT | 31 May 22 10:55 PDT |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=all --driver=docker             |                                        |         |                |                     |                     |
	| start   | -p pause-20220531105516-2169           | pause-20220531105516-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 10:55 PDT | 31 May 22 10:56 PDT |
	|         | --alsologtostderr -v=1                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| pause   | -p pause-20220531105516-2169           | pause-20220531105516-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 10:56 PDT | 31 May 22 10:56 PDT |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	| logs    | pause-20220531105516-2169 logs         | pause-20220531105516-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 10:56 PDT | 31 May 22 10:56 PDT |
	|         | -n 25                                  |                                        |         |                |                     |                     |
	| delete  | -p pause-20220531105516-2169           | pause-20220531105516-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 10:57 PDT | 31 May 22 10:57 PDT |
	| stop    | -p                                     | kubernetes-upgrade-20220531105258-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:57 PDT | 31 May 22 10:57 PDT |
	|         | kubernetes-upgrade-20220531105258-2169 |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220531105707-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:57 PDT | 31 May 22 10:57 PDT |
	|         | NoKubernetes-20220531105707-2169       |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220531105258-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:57 PDT | 31 May 22 10:57 PDT |
	|         | kubernetes-upgrade-20220531105258-2169 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220531105707-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:57 PDT | 31 May 22 10:57 PDT |
	|         | NoKubernetes-20220531105707-2169       |                                        |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker        |                                        |         |                |                     |                     |
	| delete  | -p                                     | NoKubernetes-20220531105707-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:57 PDT | 31 May 22 10:57 PDT |
	|         | NoKubernetes-20220531105707-2169       |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220531105258-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:57 PDT | 31 May 22 10:57 PDT |
	|         | kubernetes-upgrade-20220531105258-2169 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
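	# The final audit rows above record the stop/restart cycle that precedes the
	# failure. A sketch for replaying it by hand with the same profile and flags;
	# "minikube" here stands for the out/minikube-darwin-amd64 binary under test:
	minikube stop -p kubernetes-upgrade-20220531105258-2169
	minikube start -p kubernetes-upgrade-20220531105258-2169 --memory=2200 \
	  --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker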
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 10:57:51
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 10:57:51.634274   11089 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:57:51.634450   11089 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:57:51.634457   11089 out.go:309] Setting ErrFile to fd 2...
	I0531 10:57:51.634460   11089 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:57:51.634560   11089 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:57:51.634923   11089 out.go:303] Setting JSON to false
	I0531 10:57:51.651131   11089 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3440,"bootTime":1654016431,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:57:51.651273   11089 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:57:51.673226   11089 out.go:177] * [NoKubernetes-20220531105707-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:57:51.693919   11089 notify.go:193] Checking for updates...
	I0531 10:57:51.714943   11089 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 10:57:51.788794   11089 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:57:51.852152   11089 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:57:51.911000   11089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:57:51.968751   11089 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 10:57:51.990479   11089 config.go:178] Loaded profile config "kubernetes-upgrade-20220531105258-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:57:51.990507   11089 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0531 10:57:51.990532   11089 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 10:57:52.072997   11089 docker.go:137] docker version: linux-20.10.14
	I0531 10:57:52.073131   11089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:57:52.213018   11089 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 17:57:52.136952967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0531 10:57:51.418341   10983 addons.go:165] addon default-storageclass should already be in state true
	I0531 10:57:51.438958   10983 host.go:66] Checking if "kubernetes-upgrade-20220531105258-2169" exists ...
	I0531 10:57:51.439015   10983 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 10:57:51.439025   10983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 10:57:51.439079   10983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:57:51.439887   10983 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531105258-2169 --format={{.State.Status}}
	I0531 10:57:51.444840   10983 api_server.go:51] waiting for apiserver process to appear ...
	I0531 10:57:51.444916   10983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 10:57:51.457381   10983 api_server.go:71] duration metric: took 247.275666ms to wait for apiserver process to appear ...
	I0531 10:57:51.457410   10983 api_server.go:87] waiting for apiserver healthz status ...
	I0531 10:57:51.457422   10983 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63924/healthz ...
	I0531 10:57:51.464182   10983 api_server.go:266] https://127.0.0.1:63924/healthz returned 200:
	ok
	I0531 10:57:51.465619   10983 api_server.go:140] control plane version: v1.23.6
	I0531 10:57:51.465628   10983 api_server.go:130] duration metric: took 8.213944ms to wait for apiserver health ...
	I0531 10:57:51.465633   10983 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 10:57:51.471388   10983 system_pods.go:59] 5 kube-system pods found
	I0531 10:57:51.471411   10983 system_pods.go:61] "etcd-kubernetes-upgrade-20220531105258-2169" [44e2a442-cf65-4ed3-a84e-f3a25e2918b0] Pending
	I0531 10:57:51.471426   10983 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220531105258-2169" [d017c42f-2bde-4edc-a92d-618e6d18c996] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 10:57:51.471438   10983 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220531105258-2169" [1b67d833-7b71-4fe8-a556-b6183765c6bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 10:57:51.471448   10983 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220531105258-2169" [9d20a0c8-b21f-4149-aa00-d6676eab78ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 10:57:51.471463   10983 system_pods.go:61] "storage-provisioner" [9bdaa67d-c850-4207-9c6e-4b8ebd93bc3e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 10:57:51.471471   10983 system_pods.go:74] duration metric: took 5.832458ms to wait for pod list to return data ...
	I0531 10:57:51.471480   10983 kubeadm.go:572] duration metric: took 261.383894ms to wait for : map[apiserver:true system_pods:true] ...
	I0531 10:57:51.471491   10983 node_conditions.go:102] verifying NodePressure condition ...
	I0531 10:57:51.474938   10983 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 10:57:51.474954   10983 node_conditions.go:123] node cpu capacity is 6
	I0531 10:57:51.474966   10983 node_conditions.go:105] duration metric: took 3.470714ms to run NodePressure ...
	I0531 10:57:51.474976   10983 start.go:213] waiting for startup goroutines ...
	I0531 10:57:51.583429   10983 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 10:57:51.583442   10983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 10:57:51.583496   10983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531105258-2169
	I0531 10:57:51.583606   10983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63925 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa Username:docker}
	I0531 10:57:51.675175   10983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 10:57:51.695271   10983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63925 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/kubernetes-upgrade-20220531105258-2169/id_rsa Username:docker}
	I0531 10:57:51.785102   10983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 10:57:52.255164   11089 out.go:177] * Using the docker driver based on user configuration
	I0531 10:57:52.329172   10983 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 10:57:52.371222   10983 addons.go:417] enableAddons completed in 1.16113074s
	I0531 10:57:52.401814   10983 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 10:57:52.423069   10983 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220531105258-2169" cluster and "default" namespace by default
	I0531 10:57:52.350232   11089 start.go:284] selected driver: docker
	I0531 10:57:52.350243   11089 start.go:806] validating driver "docker" against <nil>
	I0531 10:57:52.350264   11089 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 10:57:52.350501   11089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:57:52.551339   11089 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 17:57:52.470430502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:57:52.551450   11089 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0531 10:57:52.551458   11089 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0531 10:57:52.551467   11089 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 10:57:52.553524   11089 start_flags.go:373] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0531 10:57:52.553638   11089 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 10:57:52.575319   11089 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 10:57:52.596188   11089 cni.go:95] Creating CNI manager for ""
	I0531 10:57:52.596199   11089 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:57:52.596215   11089 start_flags.go:306] config:
	{Name:NoKubernetes-20220531105707-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:NoKubernetes-20220531105707-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:57:52.596299   11089 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0531 10:57:52.639509   11089 out.go:177] * Starting minikube without Kubernetes NoKubernetes-20220531105707-2169 in cluster NoKubernetes-20220531105707-2169
	I0531 10:57:52.681970   11089 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 10:57:52.703114   11089 out.go:177] * Pulling base image ...
	I0531 10:57:52.745005   11089 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0531 10:57:52.745015   11089 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 10:57:52.810666   11089 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 10:57:52.810683   11089 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	W0531 10:57:52.817405   11089 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0531 10:57:52.818039   11089 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/NoKubernetes-20220531105707-2169/config.json ...
	I0531 10:57:52.818088   11089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/NoKubernetes-20220531105707-2169/config.json: {Name:mk0974302586521cf9a4cd9a7983b75dc23736a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:57:52.818447   11089 cache.go:206] Successfully downloaded all kic artifacts
	I0531 10:57:52.818491   11089 start.go:352] acquiring machines lock for NoKubernetes-20220531105707-2169: {Name:mk222b6c9770f7ceb183416ca5bbdd20217ef6e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:57:52.818756   11089 start.go:356] acquired machines lock for "NoKubernetes-20220531105707-2169" in 252.501µs
	I0531 10:57:52.818779   11089 start.go:91] Provisioning new machine with config: &{Name:NoKubernetes-20220531105707-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-20220531105707-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 10:57:52.818830   11089 start.go:131] createHost starting for "" (driver="docker")
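	# The profile.go line above shows where this run saved the NoKubernetes cluster
	# config; the flattened struct printed after it is the same data. A sketch for
	# inspecting it on disk, assuming MINIKUBE_HOME is set as in the environment
	# listing earlier in this log:
	cat "$MINIKUBE_HOME/profiles/NoKubernetes-20220531105707-2169/config.json"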
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 17:57:15 UTC, end at Tue 2022-05-31 17:57:54 UTC. --
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.762543840Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.762575700Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.762591557Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.762600425Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.763576353Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.763737835Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.763813554Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.763908971Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.810617958Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.820382407Z" level=info msg="Loading containers: start."
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.900410080Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.934702925Z" level=info msg="Loading containers: done."
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.944860417Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.944934595Z" level=info msg="Daemon has completed initialization"
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 systemd[1]: Started Docker Application Container Engine.
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.972898316Z" level=info msg="API listen on [::]:2376"
	May 31 17:57:27 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:27.975475272Z" level=info msg="API listen on /var/run/docker.sock"
	May 31 17:57:44 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:44.099634375Z" level=info msg="ignoring event" container=f60b7a76527ad878dc4cefd011dc806d364b78635d5dfdb2fefe5c8452c806bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:57:44 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:44.108823595Z" level=info msg="ignoring event" container=63733e0de90499ea60feba15ea3fcb606567329ea0e7cc0a380bc7275c16a724 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:57:44 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:44.111337835Z" level=info msg="ignoring event" container=64490d28f4a8abde05ceb6e79689481e704bb68dc85bc1b085fae34581378faf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:57:44 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:44.113086135Z" level=info msg="ignoring event" container=06be6dacda3f494b504ac6c01d941538b4e6c512b279e559df418080085a54f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:57:44 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:44.115000493Z" level=info msg="ignoring event" container=541921180c14770b16fbee0b7489aaf58fd8042af038ab7fa302a2d0c65a4d39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:57:44 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:44.115792346Z" level=info msg="ignoring event" container=9b0c7939375736dc7f33812baf9ab68d4f846336b9a8da6aa4f3fe27032df7b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:57:45 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:45.249821008Z" level=info msg="ignoring event" container=19d0208d98e803092ddcbec083687be99a5e8e2a4f4860643befdd35ef921419 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:57:45 kubernetes-upgrade-20220531105258-2169 dockerd[525]: time="2022-05-31T17:57:45.280118343Z" level=info msg="ignoring event" container=290827d2bedc755a8759cec80c5828192aaa51d2531b04d1ec3d099c906649d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
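	# The entries above are journald records for the docker unit, gathered from
	# inside the node container by the "minikube logs -n 25" call earlier in this
	# report. A sketch for pulling them directly while the profile is still up:
	minikube -p kubernetes-upgrade-20220531105258-2169 ssh "sudo journalctl -u docker --no-pager | tail -n 25"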
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5f8dfcdffe695       8fa62c12256df       7 seconds ago       Running             kube-apiserver            1                   688a92b486db4
	3752949c280c2       595f327f224a4       7 seconds ago       Running             kube-scheduler            1                   a0ce81011617a
	429d6cb0290b9       25f8c7f3da61c       10 seconds ago      Running             etcd                      1                   5a4161fbca3b3
	24b6ad3c3d29d       df7b72818ad2e       10 seconds ago      Running             kube-controller-manager   1                   c0c0d43379f94
	290827d2bedc7       8fa62c12256df       24 seconds ago      Exited              kube-apiserver            0                   9b0c793937573
	19d0208d98e80       595f327f224a4       24 seconds ago      Exited              kube-scheduler            0                   64490d28f4a8a
	06be6dacda3f4       df7b72818ad2e       24 seconds ago      Exited              kube-controller-manager   0                   f60b7a76527ad
	541921180c147       25f8c7f3da61c       24 seconds ago      Exited              etcd                      0                   63733e0de9049
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220531105258-2169
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220531105258-2169
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:57:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220531105258-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 17:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 17:57:49 +0000   Tue, 31 May 2022 17:57:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 17:57:49 +0000   Tue, 31 May 2022 17:57:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 17:57:49 +0000   Tue, 31 May 2022 17:57:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 17:57:49 +0000   Tue, 31 May 2022 17:57:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    kubernetes-upgrade-20220531105258-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                0dbe7505-8433-485e-85be-0a6a3f5341f2
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220531105258-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         5s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220531105258-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220531105258-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220531105258-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 25s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet  Node kubernetes-upgrade-20220531105258-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet  Node kubernetes-upgrade-20220531105258-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet  Node kubernetes-upgrade-20220531105258-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.001413] FS-Cache: O-key=[8] '751ad70200000000'
	[  +0.001093] FS-Cache: N-cookie c=000000004f5de6c9 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001737] FS-Cache: N-cookie d=0000000038acf5de n=000000008809a18b
	[  +0.001435] FS-Cache: N-key=[8] '751ad70200000000'
	[  +0.001928] FS-Cache: Duplicate cookie detected
	[  +0.001010] FS-Cache: O-cookie c=000000002a5eed4b [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001783] FS-Cache: O-cookie d=0000000038acf5de n=000000006a3a9612
	[  +0.001418] FS-Cache: O-key=[8] '751ad70200000000'
	[  +0.001104] FS-Cache: N-cookie c=000000004f5de6c9 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001740] FS-Cache: N-cookie d=0000000038acf5de n=000000002ffefb64
	[  +0.001430] FS-Cache: N-key=[8] '751ad70200000000'
	[  +3.329767] FS-Cache: Duplicate cookie detected
	[  +0.001037] FS-Cache: O-cookie c=00000000b56bf5b4 [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001856] FS-Cache: O-cookie d=0000000038acf5de n=00000000b91e189d
	[  +0.001481] FS-Cache: O-key=[8] '741ad70200000000'
	[  +0.001123] FS-Cache: N-cookie c=000000002d550120 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001784] FS-Cache: N-cookie d=0000000038acf5de n=00000000eccdb4bc
	[  +0.001461] FS-Cache: N-key=[8] '741ad70200000000'
	[  +0.431860] FS-Cache: Duplicate cookie detected
	[  +0.001026] FS-Cache: O-cookie c=000000004a859abe [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001835] FS-Cache: O-cookie d=0000000038acf5de n=00000000e6b4c68e
	[  +0.001495] FS-Cache: O-key=[8] '811ad70200000000'
	[  +0.001101] FS-Cache: N-cookie c=000000002d550120 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001734] FS-Cache: N-cookie d=0000000038acf5de n=00000000648703d1
	[  +0.001443] FS-Cache: N-key=[8] '811ad70200000000'
	
	* 
	* ==> etcd [429d6cb0290b] <==
	* {"level":"warn","ts":"2022-05-31T17:57:45.971Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.971Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.972Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.973Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.995Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.996Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.996Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.997Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.997Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.997Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.999Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:45.999Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.000Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.000Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.001Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.001Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.001Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.002Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.002Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.002Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.003Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.003Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.004Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.004Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2022-05-31T17:57:46.004Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:40552","server-name":"","error":"EOF"}
	
	* 
	* ==> etcd [541921180c14] <==
	* {"level":"info","ts":"2022-05-31T17:57:31.799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:57:31.799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T17:57:31.800Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:57:31.800Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:57:31.800Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:57:31.800Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:57:31.800Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:57:31.800Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:kubernetes-upgrade-20220531105258-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:57:31.801Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:57:31.801Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:57:31.801Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:57:31.801Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:57:31.801Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"warn","ts":"2022-05-31T17:57:35.359Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.976813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:57:35.360Z","caller":"traceutil/trace.go:171","msg":"trace[866589409] range","detail":"{range_begin:/registry/roles/kube-public/system:controller:bootstrap-signer; range_end:; response_count:0; response_revision:196; }","duration":"107.122577ms","start":"2022-05-31T17:57:35.252Z","end":"2022-05-31T17:57:35.360Z","steps":["trace[866589409] 'agreement among raft nodes before linearized reading'  (duration: 32.712435ms)","trace[866589409] 'range keys from in-memory index tree'  (duration: 74.195137ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T17:57:44.017Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-05-31T17:57:44.017Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"kubernetes-upgrade-20220531105258-2169","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/05/31 17:57:44 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/05/31 17:57:44 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-05-31T17:57:44.024Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"warn","ts":"2022-05-31T17:57:44.025Z","caller":"etcdhttp/metrics.go:200","msg":"serving /health false; Range fails","error":"etcdserver: server stopped"}
	{"level":"warn","ts":"2022-05-31T17:57:44.025Z","caller":"etcdhttp/metrics.go:79","msg":"/health error","output":"{\"health\":\"false\",\"reason\":\"RANGE ERROR:etcdserver: server stopped\"}","status-code":503}
	{"level":"info","ts":"2022-05-31T17:57:44.025Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T17:57:44.028Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T17:57:44.029Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"kubernetes-upgrade-20220531105258-2169","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  17:57:54 up 45 min,  0 users,  load average: 2.10, 1.36, 0.93
	Linux kubernetes-upgrade-20220531105258-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [290827d2bedc] <==
	* I0531 17:57:44.035497       1 controller.go:186] Shutting down kubernetes service endpoint reconciler
	I0531 17:57:44.035551       1 object_count_tracker.go:84] "StorageObjectCountTracker pruner is exiting"
	I0531 17:57:44.035565       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0531 17:57:44.035578       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0531 17:57:44.035585       1 available_controller.go:503] Shutting down AvailableConditionController
	I0531 17:57:44.035616       1 dynamic_cafile_content.go:170] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 17:57:44.035627       1 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 17:57:44.035644       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0531 17:57:44.035657       1 apf_controller.go:326] Shutting down API Priority and Fairness config worker
	I0531 17:57:44.035666       1 establishing_controller.go:87] Shutting down EstablishingController
	I0531 17:57:44.035674       1 naming_controller.go:302] Shutting down NamingConditionController
	I0531 17:57:44.035680       1 controller.go:122] Shutting down OpenAPI controller
	I0531 17:57:44.035682       1 storage_flowcontrol.go:150] APF bootstrap ensurer is exiting
	I0531 17:57:44.035686       1 customresource_discovery_controller.go:245] Shutting down DiscoveryController
	I0531 17:57:44.035692       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0531 17:57:44.035700       1 controller.go:89] Shutting down OpenAPI AggregationController
	I0531 17:57:44.035711       1 dynamic_cafile_content.go:170] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 17:57:44.035726       1 dynamic_serving_content.go:145] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0531 17:57:44.035762       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0531 17:57:44.035773       1 dynamic_serving_content.go:145] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0531 17:57:44.035801       1 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 17:57:44.035832       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0531 17:57:44.035843       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0531 17:57:44.035579       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0531 17:57:44.035998       1 secure_serving.go:311] Stopped listening on [::]:8443
	
	* 
	* ==> kube-apiserver [5f8dfcdffe69] <==
	* I0531 17:57:49.535116       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0531 17:57:49.535172       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0531 17:57:49.535179       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0531 17:57:49.535190       1 available_controller.go:491] Starting AvailableConditionController
	I0531 17:57:49.535192       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0531 17:57:49.539324       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 17:57:49.541364       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0531 17:57:49.576932       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0531 17:57:49.672753       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:57:49.672956       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:57:49.673137       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 17:57:49.673196       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:57:49.673438       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:57:49.673470       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:57:49.673459       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:57:49.675433       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:57:49.698750       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 17:57:50.529459       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:57:50.529477       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:57:50.535252       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:57:51.137889       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:57:51.144969       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:57:51.166756       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:57:51.180255       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:57:51.184524       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [06be6dacda3f] <==
	* I0531 17:57:35.984996       1 node_lifecycle_controller.go:539] Starting node controller
	I0531 17:57:35.985200       1 shared_informer.go:240] Waiting for caches to sync for taint
	I0531 17:57:35.993344       1 controllermanager.go:605] Started "endpoint"
	I0531 17:57:35.993437       1 endpoints_controller.go:193] Starting endpoint controller
	I0531 17:57:35.993443       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0531 17:57:36.040985       1 shared_informer.go:247] Caches are synced for tokens 
	I0531 17:57:36.143387       1 controllermanager.go:605] Started "replicationcontroller"
	I0531 17:57:36.143438       1 replica_set.go:186] Starting replicationcontroller controller
	I0531 17:57:36.143444       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0531 17:57:36.293921       1 controllermanager.go:605] Started "job"
	I0531 17:57:36.293938       1 job_controller.go:184] Starting job controller
	I0531 17:57:36.293987       1 shared_informer.go:240] Waiting for caches to sync for job
	I0531 17:57:36.443891       1 controllermanager.go:605] Started "deployment"
	I0531 17:57:36.443941       1 deployment_controller.go:153] "Starting controller" controller="deployment"
	I0531 17:57:36.443947       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0531 17:57:36.593042       1 controllermanager.go:605] Started "cronjob"
	I0531 17:57:36.593092       1 cronjob_controllerv2.go:132] "Starting cronjob controller v2"
	I0531 17:57:36.593099       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0531 17:57:36.850259       1 controllermanager.go:605] Started "namespace"
	I0531 17:57:36.850308       1 namespace_controller.go:200] Starting namespace controller
	I0531 17:57:36.850314       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0531 17:57:36.994115       1 controllermanager.go:605] Started "csrapproving"
	I0531 17:57:36.994150       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0531 17:57:36.994235       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0531 17:57:37.042908       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [24b6ad3c3d29] <==
	* I0531 17:57:51.725253       1 controllermanager.go:605] Started "attachdetach"
	I0531 17:57:51.725307       1 attach_detach_controller.go:328] Starting attach detach controller
	I0531 17:57:51.725313       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0531 17:57:51.877646       1 controllermanager.go:605] Started "ephemeral-volume"
	I0531 17:57:51.877694       1 controller.go:170] Starting ephemeral volume controller
	I0531 17:57:51.877700       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I0531 17:57:51.954272       1 controllermanager.go:605] Started "endpoint"
	I0531 17:57:51.954323       1 endpoints_controller.go:193] Starting endpoint controller
	I0531 17:57:51.954328       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0531 17:57:51.994862       1 controllermanager.go:605] Started "endpointslicemirroring"
	I0531 17:57:51.994937       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I0531 17:57:51.994944       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0531 17:57:52.020547       1 controllermanager.go:605] Started "serviceaccount"
	I0531 17:57:52.020635       1 serviceaccounts_controller.go:117] Starting service account controller
	I0531 17:57:52.020684       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0531 17:57:52.220760       1 controllermanager.go:605] Started "disruption"
	I0531 17:57:52.220815       1 disruption.go:363] Starting disruption controller
	I0531 17:57:52.220821       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0531 17:57:52.377810       1 controllermanager.go:605] Started "namespace"
	I0531 17:57:52.377867       1 namespace_controller.go:200] Starting namespace controller
	I0531 17:57:52.377873       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0531 17:57:52.521439       1 controllermanager.go:605] Started "statefulset"
	I0531 17:57:52.521460       1 stateful_set.go:147] Starting stateful set controller
	I0531 17:57:52.521550       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0531 17:57:52.570777       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [19d0208d98e8] <==
	* E0531 17:57:33.918538       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 17:57:33.918552       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:57:33.918567       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 17:57:33.919094       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:57:33.919151       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:57:33.917159       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:57:33.920771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:57:34.741167       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:57:34.741215       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:57:34.817821       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 17:57:34.818030       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:57:34.880020       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:57:34.880057       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:57:34.949262       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:57:34.949354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:57:35.004731       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:57:35.004767       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 17:57:35.018253       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:57:35.018286       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:57:35.112346       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:57:35.112396       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 17:57:37.011129       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0531 17:57:44.038329       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0531 17:57:44.038537       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0531 17:57:44.038801       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [3752949c280c] <==
	* I0531 17:57:47.901263       1 serving.go:348] Generated self-signed cert in-memory
	W0531 17:57:49.575723       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 17:57:49.575824       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:57:49.575845       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 17:57:49.579129       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 17:57:49.590262       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0531 17:57:49.591771       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 17:57:49.591898       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 17:57:49.592269       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0531 17:57:49.592309       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0531 17:57:49.597856       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 17:57:49.597911       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0531 17:57:49.692517       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:57:15 UTC, end at Tue 2022-05-31 17:57:56 UTC. --
	May 31 17:57:47 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:47.672174    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:47 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:47.772801    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:47 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:47.873838    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:47.993014    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.098461    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.199580    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.299678    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.378738    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.479867    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.580149    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.680930    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.781306    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.882437    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:48 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:48.983592    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:49.085043    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:49.186518    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:49.287619    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:49.388723    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:49.489693    2657 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220531105258-2169\" not found"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: I0531 17:57:49.680226    2657 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220531105258-2169"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: I0531 17:57:49.680344    2657 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220531105258-2169"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:49.683730    2657 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-20220531105258-2169\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-20220531105258-2169"
	May 31 17:57:49 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: E0531 17:57:49.683730    2657 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-20220531105258-2169\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-20220531105258-2169"
	May 31 17:57:50 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: I0531 17:57:50.532380    2657 apiserver.go:52] "Watching apiserver"
	May 31 17:57:50 kubernetes-upgrade-20220531105258-2169 kubelet[2657]: I0531 17:57:50.595682    2657 reconciler.go:157] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220531105258-2169 -n kubernetes-upgrade-20220531105258-2169
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220531105258-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220531105258-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.587920791s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220531105258-2169 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220531105258-2169 describe pod storage-provisioner: exit status 1 (49.881443ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220531105258-2169 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220531105258-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220531105258-2169

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220531105258-2169: (3.170855585s)
--- FAIL: TestKubernetesUpgrade (303.07s)
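For anyone replaying this post-mortem against a live cluster, the triage above reduces to two kubectl calls. A minimal sketch, assuming the context name from this run (substitute your own profile name):

	# List pods in any namespace that are not in the Running phase
	kubectl --context kubernetes-upgrade-20220531105258-2169 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# Describe whatever the previous command reported (here: storage-provisioner)
	kubectl --context kubernetes-upgrade-20220531105258-2169 describe pod storage-provisioner

In this run the describe call exits 1 with NotFound, most likely because the storage-provisioner pod was deleted in the window between the two commands.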

                                                
                                    
TestMissingContainerUpgrade (50.74s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2430514635.exe start -p missing-upgrade-20220531105207-2169 --memory=2200 --driver=docker 
E0531 10:52:11.585676    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2430514635.exe start -p missing-upgrade-20220531105207-2169 --memory=2200 --driver=docker : exit status 78 (35.713515752s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220531105207-2169] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220531105207-2169
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220531105207-2169" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:52:25.881918491 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220531105207-2169" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:52:42.001917374 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
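The systemd diff that both provisioning attempts print is the important detail: for anything other than Type=oneshot services, systemd rejects a unit carrying more than one ExecStart= setting (the exact error quoted in the diff's own comments), so a unit that redefines the command must first reset the list with an empty ExecStart=. A minimal sketch of that reset pattern as a drop-in; the drop-in path and dockerd flags below are illustrative, not taken from this run:

	# Override docker.service via a drop-in instead of rewriting the base unit
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<-'EOF'
		[Service]
		ExecStart=
		ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker

minikube v1.9.1 instead rewrites /lib/systemd/system/docker.service in place, as the diff shows, and the restart still fails here; note also that the regenerated ExecReload= line in the diff has lost its $MAINPID argument.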
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2430514635.exe start -p missing-upgrade-20220531105207-2169 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2430514635.exe start -p missing-upgrade-20220531105207-2169 --memory=2200 --driver=docker : exit status 70 (4.115943711s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220531105207-2169] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220531105207-2169
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220531105207-2169" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2430514635.exe start -p missing-upgrade-20220531105207-2169 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2430514635.exe start -p missing-upgrade-20220531105207-2169 --memory=2200 --driver=docker : exit status 70 (4.367056339s)

-- stdout --
	* [missing-upgrade-20220531105207-2169] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220531105207-2169
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220531105207-2169" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-05-31 10:52:55.465816 -0700 PDT m=+2449.035656745
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220531105207-2169
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220531105207-2169:

-- stdout --
	[
	    {
	        "Id": "341e869d5fd1a6f07e8f5468680bdcaeb5e291279ae8aa459b760fa1c359791e",
	        "Created": "2022-05-31T17:52:34.068169554Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 127226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:52:34.313089958Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/341e869d5fd1a6f07e8f5468680bdcaeb5e291279ae8aa459b760fa1c359791e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/341e869d5fd1a6f07e8f5468680bdcaeb5e291279ae8aa459b760fa1c359791e/hostname",
	        "HostsPath": "/var/lib/docker/containers/341e869d5fd1a6f07e8f5468680bdcaeb5e291279ae8aa459b760fa1c359791e/hosts",
	        "LogPath": "/var/lib/docker/containers/341e869d5fd1a6f07e8f5468680bdcaeb5e291279ae8aa459b760fa1c359791e/341e869d5fd1a6f07e8f5468680bdcaeb5e291279ae8aa459b760fa1c359791e-json.log",
	        "Name": "/missing-upgrade-20220531105207-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "missing-upgrade-20220531105207-2169:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5d1cb391852c6c93502362813ed93cbc893841afe89a0fcda17f8c849cc65447-init/diff:/var/lib/docker/overlay2/68730985f7cfd3b645dffaaf625a84e0f45a2e522a7bbd35c74f3e961455c955/diff:/var/lib/docker/overlay2/086a9a5d11913cdd684dceb8ac095d883dd96aeffd0e2f279790b7c3992d505d/diff:/var/lib/docker/overlay2/4a7767ee605e9d3846f50062d68dbb144b6c872e261ea175128352b6a2008186/diff:/var/lib/docker/overlay2/90cf826a4010a4a3587a817d18da915c42b4f8d827d97ec08235753517cf7cba/diff:/var/lib/docker/overlay2/eaa2a7e56e26bbbbe52325d4dd17430b5f88783e1d7106afef9cb75f9f64673a/diff:/var/lib/docker/overlay2/e79fa306793a060f9fc9b0e6d7b5ef03378cf4fbe65d7c89e8f0ccfcf0562282/diff:/var/lib/docker/overlay2/bba27b2a99740d20b41b7850c0375cecc063e583b9afd93a82a7cf23a44cb8f1/diff:/var/lib/docker/overlay2/6cf665e8f6ea0dc4d08cacc5e06e998a6fd9208a2e8197f3d9a7fc6f28369cbc/diff:/var/lib/docker/overlay2/c7213236b6f74adfad523b3a0745db25c9c3b5aaa7be452e8c7562ac9af55529/diff:/var/lib/docker/overlay2/e6b28f3ff5c1a7df3787620c5367e76e5d082a2719852854a0059452497aac2d/diff:/var/lib/docker/overlay2/c68b5a0b50ed2410ef2428f9ca77e4af1a8ff0f3c90c1ba30ef5f42e7c2f0fe3/diff:/var/lib/docker/overlay2/3062e3729948d2242933a53d46e139d56542622bc84399d578827874566ec45d/diff:/var/lib/docker/overlay2/5ea2fa0caf63c907fa5f7230a4d86016224b7a8090e21ccd0fafbaacc9b02989/diff:/var/lib/docker/overlay2/d321375c7b5f3519273186dddf87e312e97664c8899baad733ed047158e48167/diff:/var/lib/docker/overlay2/51b4d7bff48b339142e73ea6bf81882193895d7beee21763c05808dc42417831/diff:/var/lib/docker/overlay2/6cc3fdbbe55a5101cad2d2f3a19f351f440ca4ce572bd9590d534a0d4e756871/diff:/var/lib/docker/overlay2/c7b81ca26ce547908b8589973f707ab55de536d55f4e91ff33c4ad44c6335157/diff:/var/lib/docker/overlay2/54518fc6c0f4bd67872c1a8f18d57e28e9977220eb6b786882bdee74547cfd52/diff:/var/lib/docker/overlay2/a70efa960030191dd9226c96dd524ab1af6b4c40f8037297a048af6ce65e7b91/diff:/var/lib/docker/overlay2/4287ba7d9b601768fcd455102b8577d6e47986dacfe67932cb862726d4269593/diff:/var/lib/docker/overlay2/8cc5c99c5858de4fd5685625834a50fc3618c82d09969525ed7b0605000309eb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d1cb391852c6c93502362813ed93cbc893841afe89a0fcda17f8c849cc65447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d1cb391852c6c93502362813ed93cbc893841afe89a0fcda17f8c849cc65447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d1cb391852c6c93502362813ed93cbc893841afe89a0fcda17f8c849cc65447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220531105207-2169",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220531105207-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220531105207-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220531105207-2169",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220531105207-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c4ca0697de48dddc2b802439a562652b1a8ad4a41444f3dec6ef725f3a86fe21",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62164"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62162"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c4ca0697de48",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "ce22c718fa1f5aab08e2e363659468b98ce19bce3d3d6ad97825c42ca811cc49",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "4a63e4a8a5c8fc043bcd14188ed64ac2860bba2a6c9a76a1f934032a0376ca21",
	                    "EndpointID": "ce22c718fa1f5aab08e2e363659468b98ce19bce3d3d6ad97825c42ca811cc49",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
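The post-mortem helper dumps the entire `docker inspect` document. When triaging by hand, the same data can be pulled field by field, since `docker inspect -f` takes a Go template over this JSON; a short sketch against the container from this run:

	# Just the runtime state and PID, instead of the full JSON above.
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' missing-upgrade-20220531105207-2169

	# One line per exposed port with its published host port.
	docker inspect -f '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostPort}}{{"\n"}}{{end}}' missing-upgrade-20220531105207-2169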
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220531105207-2169 -n missing-upgrade-20220531105207-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220531105207-2169 -n missing-upgrade-20220531105207-2169: exit status 6 (423.828837ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0531 10:52:55.952759    9923 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220531105207-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220531105207-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
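The exit-6 status is a kubeconfig problem rather than a container problem: the legacy start above failed before it could write a context, so the profile never appears in the run's kubeconfig, which is what the extract-IP error says. A sketch of checking and repairing the context, using the kubeconfig path from the error and the `minikube update-context` command the warning itself recommends:

	export KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

	# Confirm the profile's context is genuinely missing.
	kubectl config get-contexts

	# Point the context at the running cluster, as the warning suggests.
	out/minikube-darwin-amd64 update-context -p missing-upgrade-20220531105207-2169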
helpers_test.go:175: Cleaning up "missing-upgrade-20220531105207-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220531105207-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220531105207-2169: (2.54828176s)
--- FAIL: TestMissingContainerUpgrade (50.74s)

TestStoppedBinaryUpgrade/Upgrade (46.38s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3875730645.exe start -p stopped-upgrade-20220531105422-2169 --memory=2200 --vm-driver=docker 
E0531 10:54:41.068975    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3875730645.exe start -p stopped-upgrade-20220531105422-2169 --memory=2200 --vm-driver=docker : exit status 70 (34.738666325s)

-- stdout --
	* [stopped-upgrade-20220531105422-2169] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig475724328
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:54:38.860088586 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220531105422-2169" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:54:55.559423506 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220531105422-2169", then "minikube start -p stopped-upgrade-20220531105422-2169 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 17.45 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.31 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 60.88 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 82.23 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 104.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 126.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 148.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 171.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 193.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 215.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 237.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 259.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 281.16 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 302.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 324.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 340.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 361.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 384.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 405.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 427.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 448.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 470.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 492.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 515.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 537.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 17:54:55.559423506 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3875730645.exe start -p stopped-upgrade-20220531105422-2169 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3875730645.exe start -p stopped-upgrade-20220531105422-2169 --memory=2200 --vm-driver=docker : exit status 70 (4.509868967s)

-- stdout --
	* [stopped-upgrade-20220531105422-2169] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1506650871
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220531105422-2169" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3875730645.exe start -p stopped-upgrade-20220531105422-2169 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3875730645.exe start -p stopped-upgrade-20220531105422-2169 --memory=2200 --vm-driver=docker : exit status 70 (4.474522265s)

-- stdout --
	* [stopped-upgrade-20220531105422-2169] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1056476759
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220531105422-2169" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (46.38s)
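Both upgrade tests die the same way: a legacy minikube (v1.9.0 here, v1.9.1 above) provisioning docker over SSH, with docker.service refusing to restart after the unit rewrite. Spelled out, the recovery path the log itself suggests; `-v=1` raises verbosity so the failing systemctl step is visible:

	out/minikube-darwin-amd64 delete -p stopped-upgrade-20220531105422-2169
	out/minikube-darwin-amd64 start -p stopped-upgrade-20220531105422-2169 --alsologtostderr -v=1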

TestPause/serial/VerifyStatus (62.76s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220531105516-2169 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220531105516-2169 --output=json --layout=cluster: exit status 2 (16.099992164s)

-- stdout --
	{"Name":"pause-20220531105516-2169","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220531105516-2169","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
pause_test.go:200: incorrect status code: 405
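With `--layout=cluster`, minikube encodes component health as HTTP-like codes in the JSON above (200 for OK, 405 for Stopped, per the StatusName fields). The cluster was deliberately paused, yet apiserver and kubelet surface 405 "Stopped" rather than a paused state, which is what pause_test.go rejects. A sketch for reading the payload at a glance, assuming jq is available on the host:

	out/minikube-darwin-amd64 status -p pause-20220531105516-2169 --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, nodes: [.Nodes[] | {name: .Name, apiserver: .Components.apiserver.StatusName, kubelet: .Components.kubelet.StatusName}]}'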
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220531105516-2169
helpers_test.go:235: (dbg) docker inspect pause-20220531105516-2169:

-- stdout --
	[
	    {
	        "Id": "3a9c44f440bcbd5b1b0cb568ab7ccb235f23e5ce2522242333d3467281d5936f",
	        "Created": "2022-05-31T17:55:22.915644883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 135349,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:55:23.219405669Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/3a9c44f440bcbd5b1b0cb568ab7ccb235f23e5ce2522242333d3467281d5936f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a9c44f440bcbd5b1b0cb568ab7ccb235f23e5ce2522242333d3467281d5936f/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a9c44f440bcbd5b1b0cb568ab7ccb235f23e5ce2522242333d3467281d5936f/hosts",
	        "LogPath": "/var/lib/docker/containers/3a9c44f440bcbd5b1b0cb568ab7ccb235f23e5ce2522242333d3467281d5936f/3a9c44f440bcbd5b1b0cb568ab7ccb235f23e5ce2522242333d3467281d5936f-json.log",
	        "Name": "/pause-20220531105516-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220531105516-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220531105516-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fe2ce8ec1cc99aae173c8a640ac4b0b4b28d3ea5edf8d6c7a2a850d9c90476db-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/docker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef35093e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe2ce8ec1cc99aae173c8a640ac4b0b4b28d3ea5edf8d6c7a2a850d9c90476db/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe2ce8ec1cc99aae173c8a640ac4b0b4b28d3ea5edf8d6c7a2a850d9c90476db/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe2ce8ec1cc99aae173c8a640ac4b0b4b28d3ea5edf8d6c7a2a850d9c90476db/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220531105516-2169",
	                "Source": "/var/lib/docker/volumes/pause-20220531105516-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220531105516-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220531105516-2169",
	                "name.minikube.sigs.k8s.io": "pause-20220531105516-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e4f43575de23ca4c4edcd57ed71ffe50232a7587b645fdec3a0195e8e2bc518",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63402"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63403"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9e4f43575de2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220531105516-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a9c44f440bc",
	                        "pause-20220531105516-2169"
	                    ],
	                    "NetworkID": "d1d08c7feefec7797d586a2056fa41ed807fe8afebdbdcee1d7ef2878b8ef587",
	                    "EndpointID": "cb2775187936e3494c68df2299e950af062719ffedbd26db36e0199873d56908",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220531105516-2169 -n pause-20220531105516-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220531105516-2169 -n pause-20220531105516-2169: exit status 2 (16.097363003s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220531105516-2169 logs -n 25
E0531 10:56:43.950139    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220531105516-2169 logs -n 25: (14.368609708s)
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                     | force-systemd-env-20220531104951-2169  | jenkins | v1.26.0-beta.1 | 31 May 22 10:49 PDT | 31 May 22 10:50 PDT |
	|         | force-systemd-env-20220531104951-2169  |                                        |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5   |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| ssh     | force-systemd-env-20220531104951-2169  | force-systemd-env-20220531104951-2169  | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | ssh docker info --format               |                                        |         |                |                     |                     |
	|         | {{.CgroupDriver}}                      |                                        |         |                |                     |                     |
	| delete  | -p                                     | offline-docker-20220531104925-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | offline-docker-20220531104925-2169     |                                        |         |                |                     |                     |
	| delete  | -p                                     | force-systemd-env-20220531104951-2169  | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | force-systemd-env-20220531104951-2169  |                                        |         |                |                     |                     |
	| start   | -p                                     | docker-flags-20220531105018-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | docker-flags-20220531105018-2169       |                                        |         |                |                     |                     |
	|         | --cache-images=false                   |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=false                           |                                        |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                   |                                        |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                   |                                        |         |                |                     |                     |
	|         | --docker-opt=debug                     |                                        |         |                |                     |                     |
	|         | --docker-opt=icc=true                  |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| start   | -p                                     | force-systemd-flag-20220531105017-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | force-systemd-flag-20220531105017-2169 |                                        |         |                |                     |                     |
	|         | --memory=2048 --force-systemd          |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	| ssh     | docker-flags-20220531105018-2169       | docker-flags-20220531105018-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | ssh sudo systemctl show                |                                        |         |                |                     |                     |
	|         | docker --property=Environment          |                                        |         |                |                     |                     |
	|         | --no-pager                             |                                        |         |                |                     |                     |
	| ssh     | force-systemd-flag-20220531105017-2169 | force-systemd-flag-20220531105017-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | ssh docker info --format               |                                        |         |                |                     |                     |
	|         | {{.CgroupDriver}}                      |                                        |         |                |                     |                     |
	| ssh     | docker-flags-20220531105018-2169       | docker-flags-20220531105018-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | ssh sudo systemctl show docker         |                                        |         |                |                     |                     |
	|         | --property=ExecStart --no-pager        |                                        |         |                |                     |                     |
	| delete  | -p                                     | force-systemd-flag-20220531105017-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | force-systemd-flag-20220531105017-2169 |                                        |         |                |                     |                     |
	| delete  | -p                                     | docker-flags-20220531105018-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:50 PDT |
	|         | docker-flags-20220531105018-2169       |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-options-20220531105047-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:51 PDT |
	|         | cert-options-20220531105047-2169       |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                                        |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                                        |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                                        |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                                        |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	|         | --apiserver-name=localhost             |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220531105047-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:50 PDT | 31 May 22 10:51 PDT |
	|         | cert-expiration-20220531105047-2169    |                                        |         |                |                     |                     |
	|         | --memory=2048 --cert-expiration=3m     |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| ssh     | cert-options-20220531105047-2169       | cert-options-20220531105047-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:51 PDT | 31 May 22 10:51 PDT |
	|         | ssh openssl x509 -text -noout -in      |                                        |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |         |                |                     |                     |
	| ssh     | -p                                     | cert-options-20220531105047-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:51 PDT | 31 May 22 10:51 PDT |
	|         | cert-options-20220531105047-2169       |                                        |         |                |                     |                     |
	|         | -- sudo cat                            |                                        |         |                |                     |                     |
	|         | /etc/kubernetes/admin.conf             |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-options-20220531105047-2169       | jenkins | v1.26.0-beta.1 | 31 May 22 10:51 PDT | 31 May 22 10:51 PDT |
	|         | cert-options-20220531105047-2169       |                                        |         |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220531105117-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:52 PDT | 31 May 22 10:52 PDT |
	|         | running-upgrade-20220531105117-2169    |                                        |         |                |                     |                     |
	| delete  | -p                                     | missing-upgrade-20220531105207-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:52 PDT | 31 May 22 10:52 PDT |
	|         | missing-upgrade-20220531105207-2169    |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220531105047-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:54 PDT | 31 May 22 10:54 PDT |
	|         | cert-expiration-20220531105047-2169    |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --cert-expiration=8760h                |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-expiration-20220531105047-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:54 PDT | 31 May 22 10:54 PDT |
	|         | cert-expiration-20220531105047-2169    |                                        |         |                |                     |                     |
	| logs    | -p                                     | stopped-upgrade-20220531105422-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:55 PDT | 31 May 22 10:55 PDT |
	|         | stopped-upgrade-20220531105422-2169    |                                        |         |                |                     |                     |
	| delete  | -p                                     | stopped-upgrade-20220531105422-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 10:55 PDT | 31 May 22 10:55 PDT |
	|         | stopped-upgrade-20220531105422-2169    |                                        |         |                |                     |                     |
	| start   | -p pause-20220531105516-2169           | pause-20220531105516-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 10:55 PDT | 31 May 22 10:55 PDT |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=all --driver=docker             |                                        |         |                |                     |                     |
	| start   | -p pause-20220531105516-2169           | pause-20220531105516-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 10:55 PDT | 31 May 22 10:56 PDT |
	|         | --alsologtostderr -v=1                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| pause   | -p pause-20220531105516-2169           | pause-20220531105516-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 10:56 PDT | 31 May 22 10:56 PDT |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 10:55:54
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 10:55:54.821872   10565 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:55:54.822093   10565 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:55:54.822098   10565 out.go:309] Setting ErrFile to fd 2...
	I0531 10:55:54.822102   10565 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:55:54.822206   10565 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:55:54.822461   10565 out.go:303] Setting JSON to false
	I0531 10:55:54.837546   10565 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3323,"bootTime":1654016431,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:55:54.837654   10565 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:55:54.859629   10565 out.go:177] * [pause-20220531105516-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:55:54.881568   10565 notify.go:193] Checking for updates...
	I0531 10:55:54.903323   10565 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 10:55:54.925566   10565 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:55:54.946728   10565 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:55:54.968454   10565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:55:54.989652   10565 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 10:55:55.011707   10565 config.go:178] Loaded profile config "pause-20220531105516-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:55:55.012068   10565 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 10:55:55.085698   10565 docker.go:137] docker version: linux-20.10.14
	I0531 10:55:55.085842   10565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:55:55.211526   10565 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:56 SystemTime:2022-05-31 17:55:55.147736718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:55:55.254101   10565 out.go:177] * Using the docker driver based on existing profile
	I0531 10:55:55.275286   10565 start.go:284] selected driver: docker
	I0531 10:55:55.275320   10565 start.go:806] validating driver "docker" against &{Name:pause-20220531105516-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220531105516-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:55:55.275485   10565 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 10:55:55.275796   10565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:55:55.400989   10565 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:56 SystemTime:2022-05-31 17:55:55.338807213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:55:55.403055   10565 cni.go:95] Creating CNI manager for ""
	I0531 10:55:55.403069   10565 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:55:55.403085   10565 start_flags.go:306] config:
	{Name:pause-20220531105516-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220531105516-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:55:55.446890   10565 out.go:177] * Starting control plane node pause-20220531105516-2169 in cluster pause-20220531105516-2169
	I0531 10:55:55.468941   10565 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 10:55:55.490754   10565 out.go:177] * Pulling base image ...
	I0531 10:55:55.532987   10565 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 10:55:55.533041   10565 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 10:55:55.533072   10565 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 10:55:55.533093   10565 cache.go:57] Caching tarball of preloaded images
	I0531 10:55:55.533324   10565 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 10:55:55.533359   10565 cache.go:60] Finished verifying existence of preloaded tar for v1.23.6 on docker
	I0531 10:55:55.534351   10565 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/config.json ...
	I0531 10:55:55.597925   10565 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 10:55:55.597940   10565 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 10:55:55.597950   10565 cache.go:206] Successfully downloaded all kic artifacts
	I0531 10:55:55.598074   10565 start.go:352] acquiring machines lock for pause-20220531105516-2169: {Name:mk156932c12ac7bad25c1ee6f1f346c97be4d290 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:55:55.598144   10565 start.go:356] acquired machines lock for "pause-20220531105516-2169" in 53.71µs
	I0531 10:55:55.598162   10565 start.go:94] Skipping create...Using existing machine configuration
	I0531 10:55:55.598170   10565 fix.go:55] fixHost starting: 
	I0531 10:55:55.598408   10565 cli_runner.go:164] Run: docker container inspect pause-20220531105516-2169 --format={{.State.Status}}
	I0531 10:55:55.667588   10565 fix.go:103] recreateIfNeeded on pause-20220531105516-2169: state=Running err=<nil>
	W0531 10:55:55.667615   10565 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 10:55:55.689454   10565 out.go:177] * Updating the running docker "pause-20220531105516-2169" container ...
	I0531 10:55:55.731110   10565 machine.go:88] provisioning docker machine ...
	I0531 10:55:55.731140   10565 ubuntu.go:169] provisioning hostname "pause-20220531105516-2169"
	I0531 10:55:55.731214   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:55.801393   10565 main.go:134] libmachine: Using SSH client type: native
	I0531 10:55:55.801606   10565 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63405 <nil> <nil>}
	I0531 10:55:55.801618   10565 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220531105516-2169 && echo "pause-20220531105516-2169" | sudo tee /etc/hostname
	I0531 10:55:55.921501   10565 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220531105516-2169
	
	I0531 10:55:55.921573   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:55.991606   10565 main.go:134] libmachine: Using SSH client type: native
	I0531 10:55:55.991756   10565 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63405 <nil> <nil>}
	I0531 10:55:55.991769   10565 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220531105516-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220531105516-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220531105516-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 10:55:56.102285   10565 main.go:134] libmachine: SSH cmd err, output: <nil>: 
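	The guard script above is the stock idiom for making a machine's hostname resolve locally: rewrite the 127.0.1.1 entry if one exists, otherwise append one. A commented restatement of the same logic (not part of the captured session; HOST stands in for the profile name):
	
	    HOST=pause-20220531105516-2169
	    if ! grep -xq ".*\s${HOST}" /etc/hosts; then          # no /etc/hosts line ends with this hostname yet
	        if grep -xq '127.0.1.1\s.*' /etc/hosts; then      # a 127.0.1.1 entry already exists...
	            sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOST}/g" /etc/hosts   # ...so point it at this host
	        else
	            echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts               # ...otherwise append a fresh entry
	        fi
	    fi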
	I0531 10:55:56.102306   10565 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 10:55:56.102327   10565 ubuntu.go:177] setting up certificates
	I0531 10:55:56.102352   10565 provision.go:83] configureAuth start
	I0531 10:55:56.102424   10565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220531105516-2169
	I0531 10:55:56.173187   10565 provision.go:138] copyHostCerts
	I0531 10:55:56.173268   10565 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 10:55:56.173279   10565 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 10:55:56.173376   10565 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 10:55:56.173582   10565 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 10:55:56.173590   10565 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 10:55:56.173645   10565 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 10:55:56.173801   10565 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 10:55:56.173807   10565 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 10:55:56.173863   10565 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 10:55:56.173979   10565 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.pause-20220531105516-2169 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220531105516-2169]
	I0531 10:55:56.288398   10565 provision.go:172] copyRemoteCerts
	I0531 10:55:56.288464   10565 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 10:55:56.288512   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:56.358738   10565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63405 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/pause-20220531105516-2169/id_rsa Username:docker}
	I0531 10:55:56.442183   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 10:55:56.459069   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0531 10:55:56.475848   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 10:55:56.492845   10565 provision.go:86] duration metric: configureAuth took 390.481455ms
	I0531 10:55:56.492858   10565 ubuntu.go:193] setting minikube options for container-runtime
	I0531 10:55:56.493000   10565 config.go:178] Loaded profile config "pause-20220531105516-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:55:56.493057   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:56.564259   10565 main.go:134] libmachine: Using SSH client type: native
	I0531 10:55:56.564449   10565 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63405 <nil> <nil>}
	I0531 10:55:56.564461   10565 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 10:55:56.675701   10565 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 10:55:56.675712   10565 ubuntu.go:71] root file system type: overlay
	I0531 10:55:56.675849   10565 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 10:55:56.675912   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:56.745634   10565 main.go:134] libmachine: Using SSH client type: native
	I0531 10:55:56.745801   10565 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63405 <nil> <nil>}
	I0531 10:55:56.745857   10565 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 10:55:56.867218   10565 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
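	
	The empty ExecStart= directive above is the general systemd idiom for replacing an inherited start command: clear it first, then set the new one, or the unit is rejected with the "more than one ExecStart= setting" error quoted in the comments. A minimal sketch of the same idiom as a drop-in override (hypothetical file path, simplified dockerd flags):
	
	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    printf '%s\n' \
	        '[Service]' \
	        'ExecStart=' \
	        'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
	        | sudo tee /etc/systemd/system/docker.service.d/override.conf
	    sudo systemctl daemon-reload && sudo systemctl restart docker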
	
	I0531 10:55:56.867332   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:56.938576   10565 main.go:134] libmachine: Using SSH client type: native
	I0531 10:55:56.938744   10565 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63405 <nil> <nil>}
	I0531 10:55:56.938757   10565 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 10:55:57.053692   10565 main.go:134] libmachine: SSH cmd err, output: <nil>: 
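	The diff one-liner above makes the update idempotent: diff exits zero when the freshly rendered unit matches the installed one, so the swap-and-restart branch only runs when something actually changed. The same command, unfolded for readability:
	
	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	        || {   # files differ (or docker.service is not installed yet): swap in the new unit and restart
	            sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	            sudo systemctl -f daemon-reload \
	                && sudo systemctl -f enable docker \
	                && sudo systemctl -f restart docker
	        }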
	I0531 10:55:57.053709   10565 machine.go:91] provisioned docker machine in 1.322605569s
	I0531 10:55:57.053717   10565 start.go:306] post-start starting for "pause-20220531105516-2169" (driver="docker")
	I0531 10:55:57.053755   10565 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 10:55:57.053817   10565 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 10:55:57.053865   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:57.123650   10565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63405 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/pause-20220531105516-2169/id_rsa Username:docker}
	I0531 10:55:57.209048   10565 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 10:55:57.212699   10565 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 10:55:57.212721   10565 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 10:55:57.212729   10565 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 10:55:57.212736   10565 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 10:55:57.212743   10565 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 10:55:57.212869   10565 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 10:55:57.213037   10565 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 10:55:57.213177   10565 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 10:55:57.220594   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 10:55:57.237587   10565 start.go:309] post-start completed in 183.856677ms
	I0531 10:55:57.237664   10565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 10:55:57.237710   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:57.308444   10565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63405 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/pause-20220531105516-2169/id_rsa Username:docker}
	I0531 10:55:57.389558   10565 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 10:55:57.394110   10565 fix.go:57] fixHost completed within 1.79595991s
	I0531 10:55:57.394121   10565 start.go:81] releasing machines lock for "pause-20220531105516-2169", held for 1.795992323s
	I0531 10:55:57.394200   10565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220531105516-2169
	I0531 10:55:57.464597   10565 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 10:55:57.464604   10565 ssh_runner.go:195] Run: systemctl --version
	I0531 10:55:57.464654   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:57.464671   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:57.539037   10565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63405 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/pause-20220531105516-2169/id_rsa Username:docker}
	I0531 10:55:57.540730   10565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63405 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/pause-20220531105516-2169/id_rsa Username:docker}
	I0531 10:55:57.619875   10565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 10:55:57.747129   10565 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 10:55:57.757131   10565 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 10:55:57.757193   10565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 10:55:57.768194   10565 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 10:55:57.784742   10565 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
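	In the crictl step just above, %!s(MISSING) is the logger dropping a printf argument; the two echoed lines show the payload that was actually written. The resulting file points crictl at the dockershim socket:
	
	    # /etc/crictl.yaml as written by the tee command above
	    runtime-endpoint: unix:///var/run/dockershim.sock
	    image-endpoint: unix:///var/run/dockershim.sock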
	I0531 10:55:57.890861   10565 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 10:55:58.000224   10565 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 10:55:58.014744   10565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 10:55:58.104040   10565 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 10:55:58.114783   10565 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 10:55:58.153848   10565 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 10:55:58.212132   10565 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 10:55:58.212220   10565 cli_runner.go:164] Run: docker exec -t pause-20220531105516-2169 dig +short host.docker.internal
	I0531 10:55:58.338026   10565 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 10:55:58.338117   10565 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 10:55:58.342584   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:58.412170   10565 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 10:55:58.412234   10565 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 10:55:58.441311   10565 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 10:55:58.441327   10565 docker.go:541] Images already preloaded, skipping extraction
	I0531 10:55:58.441396   10565 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 10:55:58.470509   10565 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 10:55:58.470523   10565 cache_images.go:84] Images are preloaded, skipping loading
	I0531 10:55:58.470600   10565 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 10:55:58.543240   10565 cni.go:95] Creating CNI manager for ""
	I0531 10:55:58.543252   10565 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:55:58.543267   10565 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 10:55:58.543284   10565 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220531105516-2169 NodeName:pause-20220531105516-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 10:55:58.543405   10565 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20220531105516-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
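	
	A config like the one above is handed to kubeadm via --config; an illustrative invocation using the binary directory and staging path that appear below in this log (the exact flags minikube passes are not captured in this excerpt, so treat them as an assumption):
	
	    sudo /var/lib/minikube/binaries/v1.23.6/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml   # illustrative only; real invocation not shown here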
	
	I0531 10:55:58.543505   10565 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20220531105516-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:pause-20220531105516-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 10:55:58.543561   10565 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 10:55:58.551162   10565 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 10:55:58.551207   10565 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 10:55:58.559165   10565 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0531 10:55:58.571869   10565 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 10:55:58.584296   10565 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes)
	I0531 10:55:58.596809   10565 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 10:55:58.600655   10565 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169 for IP: 192.168.49.2
	I0531 10:55:58.600788   10565 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 10:55:58.600840   10565 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 10:55:58.600916   10565 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/client.key
	I0531 10:55:58.600990   10565 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/apiserver.key.dd3b5fb2
	I0531 10:55:58.601045   10565 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/proxy-client.key
	I0531 10:55:58.601254   10565 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 10:55:58.601288   10565 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 10:55:58.601300   10565 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 10:55:58.601332   10565 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 10:55:58.601365   10565 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 10:55:58.601410   10565 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 10:55:58.601474   10565 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 10:55:58.601994   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 10:55:58.619495   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 10:55:58.636958   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 10:55:58.653846   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 10:55:58.672144   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 10:55:58.690321   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 10:55:58.709814   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 10:55:58.726695   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 10:55:58.748964   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 10:55:58.775449   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 10:55:58.794795   10565 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 10:55:58.815017   10565 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
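
The scp steps above stage the cluster certificates and the kubeconfig inside the node's filesystem. A quick way to confirm they landed, run from the host (a sketch; assumes the profile from this run is still up):

    # List the certificates minikube copied into the node
    minikube -p pause-20220531105516-2169 ssh "sudo ls -la /var/lib/minikube/certs"
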
	I0531 10:55:58.829378   10565 ssh_runner.go:195] Run: openssl version
	I0531 10:55:58.835326   10565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 10:55:58.844976   10565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 10:55:58.849674   10565 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 10:55:58.849719   10565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 10:55:58.855106   10565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 10:55:58.862467   10565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 10:55:58.870555   10565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 10:55:58.874219   10565 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 10:55:58.874269   10565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 10:55:58.879371   10565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 10:55:58.886980   10565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 10:55:58.895786   10565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:55:58.900278   10565 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:55:58.900324   10565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 10:55:58.905667   10565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
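
The openssl/ln sequence above is a manual c_rehash: each CA certificate is symlinked under /etc/ssl/certs by its OpenSSL subject hash (b5213941 for minikubeCA in this run) so TLS lookups can find it. A minimal sketch of the same two steps for one certificate:

    # Compute the subject hash OpenSSL uses for CA lookup, then link the cert under <hash>.0
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
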
	I0531 10:55:58.913313   10565 kubeadm.go:395] StartCluster: {Name:pause-20220531105516-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220531105516-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:55:58.913405   10565 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 10:55:58.941649   10565 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 10:55:58.949429   10565 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 10:55:58.949442   10565 kubeadm.go:626] restartCluster start
	I0531 10:55:58.949499   10565 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 10:55:58.956889   10565 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 10:55:58.956952   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:59.027684   10565 kubeconfig.go:92] found "pause-20220531105516-2169" server: "https://127.0.0.1:63404"
	I0531 10:55:59.028159   10565 kapi.go:59] client config for pause-20220531105516-2169: &rest.Config{Host:"https://127.0.0.1:63404", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22c2180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 10:55:59.028678   10565 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 10:55:59.036319   10565 api_server.go:165] Checking apiserver status ...
	I0531 10:55:59.036379   10565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 10:55:59.045570   10565 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1568/cgroup
	W0531 10:55:59.053136   10565 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1568/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 10:55:59.053164   10565 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63404/healthz ...
	I0531 10:55:59.058401   10565 api_server.go:266] https://127.0.0.1:63404/healthz returned 200:
	ok
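
The healthz probe is a plain HTTPS GET against the apiserver port that Docker publishes on the host. The equivalent manual check (a sketch; port 63404 is specific to this run, and -k skips verification against the minikube CA):

    # A healthy apiserver answers 200 with body "ok"
    curl -k https://127.0.0.1:63404/healthz
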
	I0531 10:55:59.068588   10565 system_pods.go:86] 6 kube-system pods found
	I0531 10:55:59.068601   10565 system_pods.go:89] "coredns-64897985d-4s59z" [6426fc67-8009-4a55-a552-a949daedd33e] Running
	I0531 10:55:59.068605   10565 system_pods.go:89] "etcd-pause-20220531105516-2169" [3a6a2e78-715d-49cd-a9f5-e40eb5e9655b] Running
	I0531 10:55:59.068609   10565 system_pods.go:89] "kube-apiserver-pause-20220531105516-2169" [6373c815-4d5f-48d2-9704-f039ce864c7e] Running
	I0531 10:55:59.068613   10565 system_pods.go:89] "kube-controller-manager-pause-20220531105516-2169" [57b6bfd6-e106-4d90-b328-a1a319edcc23] Running
	I0531 10:55:59.068616   10565 system_pods.go:89] "kube-proxy-9dks8" [b02e6246-38d5-4d04-9c0d-3822fe8cd5eb] Running
	I0531 10:55:59.068620   10565 system_pods.go:89] "kube-scheduler-pause-20220531105516-2169" [5d579d6b-d751-4e1e-8f9c-f68dcda8cfe3] Running
	I0531 10:55:59.069800   10565 api_server.go:140] control plane version: v1.23.6
	I0531 10:55:59.069809   10565 kubeadm.go:620] The running cluster does not require reconfiguration: 127.0.0.1
	I0531 10:55:59.069816   10565 kubeadm.go:674] Taking a shortcut, as the cluster seems to be properly configured
	I0531 10:55:59.069822   10565 kubeadm.go:630] restartCluster took 120.377519ms
	I0531 10:55:59.069826   10565 kubeadm.go:397] StartCluster complete in 156.524361ms
	I0531 10:55:59.069837   10565 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:55:59.069901   10565 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:55:59.070325   10565 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:55:59.071129   10565 kapi.go:59] client config for pause-20220531105516-2169: &rest.Config{Host:"https://127.0.0.1:63404", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22c2180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 10:55:59.073748   10565 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220531105516-2169" rescaled to 1
	I0531 10:55:59.073779   10565 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 10:55:59.073796   10565 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 10:55:59.073811   10565 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 10:55:59.073849   10565 addons.go:65] Setting storage-provisioner=true in profile "pause-20220531105516-2169"
	I0531 10:55:59.117807   10565 out.go:177] * Verifying Kubernetes components...
	I0531 10:55:59.073855   10565 addons.go:65] Setting default-storageclass=true in profile "pause-20220531105516-2169"
	I0531 10:55:59.073943   10565 config.go:178] Loaded profile config "pause-20220531105516-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:55:59.117831   10565 addons.go:153] Setting addon storage-provisioner=true in "pause-20220531105516-2169"
	I0531 10:55:59.123352   10565 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 10:55:59.138886   10565 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220531105516-2169"
	W0531 10:55:59.138895   10565 addons.go:165] addon storage-provisioner should already be in state true
	I0531 10:55:59.138917   10565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 10:55:59.138944   10565 host.go:66] Checking if "pause-20220531105516-2169" exists ...
	I0531 10:55:59.139164   10565 cli_runner.go:164] Run: docker container inspect pause-20220531105516-2169 --format={{.State.Status}}
	I0531 10:55:59.139262   10565 cli_runner.go:164] Run: docker container inspect pause-20220531105516-2169 --format={{.State.Status}}
	I0531 10:55:59.150012   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:59.217024   10565 kapi.go:59] client config for pause-20220531105516-2169: &rest.Config{Host:"https://127.0.0.1:63404", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/pause-20220531105516-2169/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22c2180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 10:55:59.219760   10565 addons.go:153] Setting addon default-storageclass=true in "pause-20220531105516-2169"
	W0531 10:55:59.219771   10565 addons.go:165] addon default-storageclass should already be in state true
	I0531 10:55:59.219789   10565 host.go:66] Checking if "pause-20220531105516-2169" exists ...
	I0531 10:55:59.220102   10565 cli_runner.go:164] Run: docker container inspect pause-20220531105516-2169 --format={{.State.Status}}
	I0531 10:55:59.243979   10565 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 10:55:59.265040   10565 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 10:55:59.265053   10565 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 10:55:59.265112   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:59.267105   10565 node_ready.go:35] waiting up to 6m0s for node "pause-20220531105516-2169" to be "Ready" ...
	I0531 10:55:59.270437   10565 node_ready.go:49] node "pause-20220531105516-2169" has status "Ready":"True"
	I0531 10:55:59.270449   10565 node_ready.go:38] duration metric: took 3.317581ms waiting for node "pause-20220531105516-2169" to be "Ready" ...
	I0531 10:55:59.270454   10565 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 10:55:59.275236   10565 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-4s59z" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.282028   10565 pod_ready.go:92] pod "coredns-64897985d-4s59z" in "kube-system" namespace has status "Ready":"True"
	I0531 10:55:59.282037   10565 pod_ready.go:81] duration metric: took 6.788178ms waiting for pod "coredns-64897985d-4s59z" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.282045   10565 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220531105516-2169" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.287311   10565 pod_ready.go:92] pod "etcd-pause-20220531105516-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 10:55:59.287323   10565 pod_ready.go:81] duration metric: took 5.272167ms waiting for pod "etcd-pause-20220531105516-2169" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.287331   10565 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220531105516-2169" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.293374   10565 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 10:55:59.293387   10565 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 10:55:59.293459   10565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220531105516-2169
	I0531 10:55:59.293466   10565 pod_ready.go:92] pod "kube-apiserver-pause-20220531105516-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 10:55:59.293475   10565 pod_ready.go:81] duration metric: took 6.137494ms waiting for pod "kube-apiserver-pause-20220531105516-2169" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.293488   10565 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220531105516-2169" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.299403   10565 pod_ready.go:92] pod "kube-controller-manager-pause-20220531105516-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 10:55:59.299418   10565 pod_ready.go:81] duration metric: took 5.919185ms waiting for pod "kube-controller-manager-pause-20220531105516-2169" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.299425   10565 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9dks8" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.340710   10565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63405 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/pause-20220531105516-2169/id_rsa Username:docker}
	I0531 10:55:59.366131   10565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63405 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/pause-20220531105516-2169/id_rsa Username:docker}
	I0531 10:55:59.429717   10565 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 10:55:59.457209   10565 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 10:55:59.683242   10565 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 10:55:59.684878   10565 pod_ready.go:92] pod "kube-proxy-9dks8" in "kube-system" namespace has status "Ready":"True"
	I0531 10:55:59.704122   10565 addons.go:417] enableAddons completed in 630.322792ms
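
Both addons were applied with the bundled kubectl against the in-node kubeconfig. The resulting addon state can be checked from the host (a sketch):

    # Show which addons minikube considers enabled for this profile
    minikube -p pause-20220531105516-2169 addons list
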
	I0531 10:55:59.704147   10565 pod_ready.go:81] duration metric: took 404.718033ms waiting for pod "kube-proxy-9dks8" in "kube-system" namespace to be "Ready" ...
	I0531 10:55:59.704168   10565 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220531105516-2169" in "kube-system" namespace to be "Ready" ...
	I0531 10:56:00.071370   10565 pod_ready.go:92] pod "kube-scheduler-pause-20220531105516-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 10:56:00.071379   10565 pod_ready.go:81] duration metric: took 367.209044ms waiting for pod "kube-scheduler-pause-20220531105516-2169" in "kube-system" namespace to be "Ready" ...
	I0531 10:56:00.071388   10565 pod_ready.go:38] duration metric: took 800.93275ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
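
The pod_ready loop above polls the Ready condition of each system-critical pod. Roughly the same check expressed directly with kubectl (a sketch; the kubeconfig context is named after the profile):

    # Block until the kube-system pods report Ready, with the same 6m budget
    kubectl --context pause-20220531105516-2169 -n kube-system \
      wait --for=condition=Ready pod --all --timeout=6m
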
	I0531 10:56:00.071409   10565 api_server.go:51] waiting for apiserver process to appear ...
	I0531 10:56:00.071460   10565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 10:56:00.083232   10565 api_server.go:71] duration metric: took 1.009445287s to wait for apiserver process to appear ...
	I0531 10:56:00.083247   10565 api_server.go:87] waiting for apiserver healthz status ...
	I0531 10:56:00.083254   10565 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63404/healthz ...
	I0531 10:56:00.089344   10565 api_server.go:266] https://127.0.0.1:63404/healthz returned 200:
	ok
	I0531 10:56:00.090500   10565 api_server.go:140] control plane version: v1.23.6
	I0531 10:56:00.090507   10565 api_server.go:130] duration metric: took 7.256798ms to wait for apiserver health ...
	I0531 10:56:00.090512   10565 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 10:56:00.271966   10565 system_pods.go:59] 7 kube-system pods found
	I0531 10:56:00.271979   10565 system_pods.go:61] "coredns-64897985d-4s59z" [6426fc67-8009-4a55-a552-a949daedd33e] Running
	I0531 10:56:00.271983   10565 system_pods.go:61] "etcd-pause-20220531105516-2169" [3a6a2e78-715d-49cd-a9f5-e40eb5e9655b] Running
	I0531 10:56:00.271986   10565 system_pods.go:61] "kube-apiserver-pause-20220531105516-2169" [6373c815-4d5f-48d2-9704-f039ce864c7e] Running
	I0531 10:56:00.271990   10565 system_pods.go:61] "kube-controller-manager-pause-20220531105516-2169" [57b6bfd6-e106-4d90-b328-a1a319edcc23] Running
	I0531 10:56:00.271993   10565 system_pods.go:61] "kube-proxy-9dks8" [b02e6246-38d5-4d04-9c0d-3822fe8cd5eb] Running
	I0531 10:56:00.271997   10565 system_pods.go:61] "kube-scheduler-pause-20220531105516-2169" [5d579d6b-d751-4e1e-8f9c-f68dcda8cfe3] Running
	I0531 10:56:00.272003   10565 system_pods.go:61] "storage-provisioner" [e49a9253-f33c-408f-8cd4-1954e8b7b383] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 10:56:00.272008   10565 system_pods.go:74] duration metric: took 181.494851ms to wait for pod list to return data ...
	I0531 10:56:00.272015   10565 default_sa.go:34] waiting for default service account to be created ...
	I0531 10:56:00.470680   10565 default_sa.go:45] found service account: "default"
	I0531 10:56:00.470691   10565 default_sa.go:55] duration metric: took 198.674163ms for default service account to be created ...
	I0531 10:56:00.470696   10565 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 10:56:00.673398   10565 system_pods.go:86] 7 kube-system pods found
	I0531 10:56:00.673411   10565 system_pods.go:89] "coredns-64897985d-4s59z" [6426fc67-8009-4a55-a552-a949daedd33e] Running
	I0531 10:56:00.673419   10565 system_pods.go:89] "etcd-pause-20220531105516-2169" [3a6a2e78-715d-49cd-a9f5-e40eb5e9655b] Running
	I0531 10:56:00.673423   10565 system_pods.go:89] "kube-apiserver-pause-20220531105516-2169" [6373c815-4d5f-48d2-9704-f039ce864c7e] Running
	I0531 10:56:00.673426   10565 system_pods.go:89] "kube-controller-manager-pause-20220531105516-2169" [57b6bfd6-e106-4d90-b328-a1a319edcc23] Running
	I0531 10:56:00.673429   10565 system_pods.go:89] "kube-proxy-9dks8" [b02e6246-38d5-4d04-9c0d-3822fe8cd5eb] Running
	I0531 10:56:00.673433   10565 system_pods.go:89] "kube-scheduler-pause-20220531105516-2169" [5d579d6b-d751-4e1e-8f9c-f68dcda8cfe3] Running
	I0531 10:56:00.673437   10565 system_pods.go:89] "storage-provisioner" [e49a9253-f33c-408f-8cd4-1954e8b7b383] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 10:56:00.673442   10565 system_pods.go:126] duration metric: took 202.744446ms to wait for k8s-apps to be running ...
	I0531 10:56:00.673450   10565 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 10:56:00.673496   10565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 10:56:00.684227   10565 system_svc.go:56] duration metric: took 10.775878ms WaitForService to wait for kubelet.
	I0531 10:56:00.684239   10565 kubeadm.go:572] duration metric: took 1.610463112s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 10:56:00.684262   10565 node_conditions.go:102] verifying NodePressure condition ...
	I0531 10:56:00.871264   10565 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 10:56:00.871303   10565 node_conditions.go:123] node cpu capacity is 6
	I0531 10:56:00.871316   10565 node_conditions.go:105] duration metric: took 187.052649ms to run NodePressure ...
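
The NodePressure check reads capacity from the node's status; 61255492Ki is about 58.4 GiB of ephemeral storage alongside the 6 CPUs. The same fields are visible with (a sketch):

    # Print the node's reported capacity (cpu, memory, ephemeral-storage)
    kubectl --context pause-20220531105516-2169 get node pause-20220531105516-2169 \
      -o jsonpath='{.status.capacity}'
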
	I0531 10:56:00.871323   10565 start.go:213] waiting for startup goroutines ...
	I0531 10:56:00.900593   10565 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 10:56:00.923039   10565 out.go:177] * Done! kubectl is now configured to use "pause-20220531105516-2169" cluster and "default" namespace by default
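
At this point the host kubeconfig carries a context named after the profile, so the cluster is reachable directly:

    # Sanity-check the restarted cluster from the host
    kubectl --context pause-20220531105516-2169 get nodes
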
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 17:55:23 UTC, end at Tue 2022-05-31 17:56:34 UTC. --
	May 31 17:55:25 pause-20220531105516-2169 dockerd[127]: time="2022-05-31T17:55:25.688889372Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 31 17:55:25 pause-20220531105516-2169 dockerd[127]: time="2022-05-31T17:55:25.689273185Z" level=info msg="Daemon shutdown complete"
	May 31 17:55:25 pause-20220531105516-2169 systemd[1]: docker.service: Succeeded.
	May 31 17:55:25 pause-20220531105516-2169 systemd[1]: Stopped Docker Application Container Engine.
	May 31 17:55:25 pause-20220531105516-2169 systemd[1]: Starting Docker Application Container Engine...
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.743447351Z" level=info msg="Starting up"
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.745137195Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.745169570Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.745186020Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.745193287Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.746291908Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.746325555Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.746339723Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.746347440Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.749128022Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.752797865Z" level=info msg="Loading containers: start."
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.827543361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.858407539Z" level=info msg="Loading containers: done."
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.872822915Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.872879531Z" level=info msg="Daemon has completed initialization"
	May 31 17:55:25 pause-20220531105516-2169 systemd[1]: Started Docker Application Container Engine.
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.896968074Z" level=info msg="API listen on [::]:2376"
	May 31 17:55:25 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:25.899779714Z" level=info msg="API listen on /var/run/docker.sock"
	May 31 17:55:57 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:57.755697773Z" level=info msg="ignoring event" container=edd70cea55ad1dbf0536024c7c0f310187eb84f8c3f04ea328b5737328785da9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:55:57 pause-20220531105516-2169 dockerd[381]: time="2022-05-31T17:55:57.899389781Z" level=info msg="ignoring event" container=00dfd8cd64e1bf0b2b0881d7b429a9195a75f4020ea287c1450ac2e2abd95cb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
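
The Docker section above is journalctl output for the docker unit inside the node; it records a clean daemon restart ("Daemon shutdown complete" followed by "Daemon has completed initialization"). To pull the same logs by hand (a sketch):

    # Tail the docker unit journal inside the minikube node
    minikube -p pause-20220531105516-2169 ssh "sudo journalctl -u docker --no-pager -n 50"
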
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS                       PORTS     NAMES
	54da2cabd2b3   6e38f40d628d           "/storage-provisioner"   36 seconds ago       Up 36 seconds (Paused)                 k8s_storage-provisioner_storage-provisioner_kube-system_e49a9253-f33c-408f-8cd4-1954e8b7b383_0
	15687cccfccf   k8s.gcr.io/pause:3.6   "/pause"                 37 seconds ago       Up 36 seconds (Paused)                 k8s_POD_storage-provisioner_kube-system_e49a9253-f33c-408f-8cd4-1954e8b7b383_0
	7c12a9539459   a4ca41631cc7           "/coredns -conf /etc…"   45 seconds ago       Up 45 seconds (Paused)                 k8s_coredns_coredns-64897985d-4s59z_kube-system_6426fc67-8009-4a55-a552-a949daedd33e_0
	3053f1b74949   4c0375452406           "/usr/local/bin/kube…"   45 seconds ago       Up 45 seconds (Paused)                 k8s_kube-proxy_kube-proxy-9dks8_kube-system_b02e6246-38d5-4d04-9c0d-3822fe8cd5eb_0
	bb32501ef60d   k8s.gcr.io/pause:3.6   "/pause"                 45 seconds ago       Up 45 seconds (Paused)                 k8s_POD_kube-proxy-9dks8_kube-system_b02e6246-38d5-4d04-9c0d-3822fe8cd5eb_0
	1aeece019be1   k8s.gcr.io/pause:3.6   "/pause"                 45 seconds ago       Up 45 seconds (Paused)                 k8s_POD_coredns-64897985d-4s59z_kube-system_6426fc67-8009-4a55-a552-a949daedd33e_0
	00dfd8cd64e1   k8s.gcr.io/pause:3.6   "/pause"                 45 seconds ago       Exited (0) 39 seconds ago              k8s_POD_coredns-64897985d-h5n2c_kube-system_4642d781-1424-4ca7-9ad6-f11cdc7ddabb_0
	688de1064f5f   df7b72818ad2           "kube-controller-man…"   About a minute ago   Up About a minute (Paused)             k8s_kube-controller-manager_kube-controller-manager-pause-20220531105516-2169_kube-system_7649cc859e46fddd56b6521f620b5c2c_0
	3c4d15e64cdd   8fa62c12256d           "kube-apiserver --ad…"   About a minute ago   Up About a minute (Paused)             k8s_kube-apiserver_kube-apiserver-pause-20220531105516-2169_kube-system_929237bc0968671f3023ebc03f9f4902_0
	ba71dc50c832   25f8c7f3da61           "etcd --advertise-cl…"   About a minute ago   Up About a minute (Paused)             k8s_etcd_etcd-pause-20220531105516-2169_kube-system_1ba4ff814f40df686b7ed04dc5b1c6c7_0
	db8e3f0ada48   595f327f224a           "kube-scheduler --au…"   About a minute ago   Up About a minute (Paused)             k8s_kube-scheduler_kube-scheduler-pause-20220531105516-2169_kube-system_768e34c80e277db0847afdf5bec0abcb_0
	a96ab942e309   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-controller-manager-pause-20220531105516-2169_kube-system_7649cc859e46fddd56b6521f620b5c2c_0
	69d80b983b15   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-apiserver-pause-20220531105516-2169_kube-system_929237bc0968671f3023ebc03f9f4902_0
	e4cf6ef2503c   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_etcd-pause-20220531105516-2169_kube-system_1ba4ff814f40df686b7ed04dc5b1c6c7_0
	a4da04f94bcb   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-scheduler-pause-20220531105516-2169_kube-system_768e34c80e277db0847afdf5bec0abcb_0
	time="2022-05-31T17:56:36Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> coredns [7c12a9539459] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001413] FS-Cache: O-key=[8] '751ad70200000000'
	[  +0.001093] FS-Cache: N-cookie c=000000004f5de6c9 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001737] FS-Cache: N-cookie d=0000000038acf5de n=000000008809a18b
	[  +0.001435] FS-Cache: N-key=[8] '751ad70200000000'
	[  +0.001928] FS-Cache: Duplicate cookie detected
	[  +0.001010] FS-Cache: O-cookie c=000000002a5eed4b [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001783] FS-Cache: O-cookie d=0000000038acf5de n=000000006a3a9612
	[  +0.001418] FS-Cache: O-key=[8] '751ad70200000000'
	[  +0.001104] FS-Cache: N-cookie c=000000004f5de6c9 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001740] FS-Cache: N-cookie d=0000000038acf5de n=000000002ffefb64
	[  +0.001430] FS-Cache: N-key=[8] '751ad70200000000'
	[  +3.329767] FS-Cache: Duplicate cookie detected
	[  +0.001037] FS-Cache: O-cookie c=00000000b56bf5b4 [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001856] FS-Cache: O-cookie d=0000000038acf5de n=00000000b91e189d
	[  +0.001481] FS-Cache: O-key=[8] '741ad70200000000'
	[  +0.001123] FS-Cache: N-cookie c=000000002d550120 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001784] FS-Cache: N-cookie d=0000000038acf5de n=00000000eccdb4bc
	[  +0.001461] FS-Cache: N-key=[8] '741ad70200000000'
	[  +0.431860] FS-Cache: Duplicate cookie detected
	[  +0.001026] FS-Cache: O-cookie c=000000004a859abe [p=00000000a0b6b306 fl=226 nc=0 na=1]
	[  +0.001835] FS-Cache: O-cookie d=0000000038acf5de n=00000000e6b4c68e
	[  +0.001495] FS-Cache: O-key=[8] '811ad70200000000'
	[  +0.001101] FS-Cache: N-cookie c=000000002d550120 [p=00000000a0b6b306 fl=2 nc=0 na=1]
	[  +0.001734] FS-Cache: N-cookie d=0000000038acf5de n=00000000648703d1
	[  +0.001443] FS-Cache: N-key=[8] '811ad70200000000'
	
	* 
	* ==> etcd [ba71dc50c832] <==
	* {"level":"info","ts":"2022-05-31T17:55:32.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-05-31T17:55:32.692Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-05-31T17:55:32.694Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T17:55:32.694Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T17:55:32.694Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T17:55:32.694Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:55:32.694Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:55:32.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T17:55:32.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T17:55:32.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-31T17:55:32.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:32.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:32.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:32.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:32.987Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:32.988Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:32.988Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:32.988Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:32.988Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:55:32.988Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:55:32.988Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:55:32.988Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:55:32.988Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:pause-20220531105516-2169 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:55:32.989Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-31T17:55:32.989Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  17:56:47 up 44 min,  0 users,  load average: 0.71, 1.01, 0.79
	Linux pause-20220531105516-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3c4d15e64cdd] <==
	* I0531 17:55:35.103495       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:55:35.103844       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:55:35.105252       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:55:35.105262       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:55:35.109341       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:55:35.123123       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:55:36.002959       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:55:36.003024       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:55:36.008220       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:55:36.011135       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:55:36.011164       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:55:36.298951       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:55:36.328037       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:55:36.449930       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:55:36.453640       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0531 17:55:36.454325       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:55:36.456999       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:55:37.137570       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:55:38.035540       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:55:38.041911       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:55:38.050509       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:55:38.209464       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:55:50.662095       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:55:50.789882       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:55:51.514247       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [688de1064f5f] <==
	* W0531 17:55:50.665400       1 node_lifecycle_controller.go:1012] Missing timestamp for Node pause-20220531105516-2169. Assuming now as a timestamp.
	I0531 17:55:50.665416       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0531 17:55:50.665542       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 17:55:50.665770       1 event.go:294] "Event occurred" object="pause-20220531105516-2169" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220531105516-2169 event: Registered Node pause-20220531105516-2169 in Controller"
	I0531 17:55:50.666635       1 range_allocator.go:374] Set node pause-20220531105516-2169 PodCIDR to [10.244.0.0/24]
	I0531 17:55:50.666810       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0531 17:55:50.668299       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:55:50.668441       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0531 17:55:50.679120       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 17:55:50.687998       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-h5n2c"
	I0531 17:55:50.695612       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-4s59z"
	I0531 17:55:50.763289       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:55:50.767812       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 17:55:50.767858       1 disruption.go:371] Sending events to api server.
	I0531 17:55:50.865506       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9dks8"
	I0531 17:55:50.881287       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:55:50.940904       1 shared_informer.go:247] Caches are synced for endpoint 
	I0531 17:55:50.962270       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:55:50.964290       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 17:55:50.965283       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0531 17:55:51.117049       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:55:51.120522       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-h5n2c"
	I0531 17:55:51.295648       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:55:51.364426       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:55:51.364480       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [3053f1b74949] <==
	* I0531 17:55:51.489484       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 17:55:51.489542       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 17:55:51.489563       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:55:51.508418       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:55:51.508501       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:55:51.508510       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:55:51.508523       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:55:51.508743       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:55:51.509277       1 config.go:317] "Starting service config controller"
	I0531 17:55:51.509307       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:55:51.509348       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:55:51.509351       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:55:51.609581       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 17:55:51.609632       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [db8e3f0ada48] <==
	* W0531 17:55:35.042396       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:55:35.042428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:55:35.042444       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:55:35.042459       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:55:35.042522       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:35.042534       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:55:35.042689       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:55:35.042702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:55:35.042789       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:55:35.042841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 17:55:36.007729       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:36.007796       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:55:36.044036       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:55:36.044078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 17:55:36.049928       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:55:36.049974       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:55:36.055254       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:55:36.055291       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:55:36.085360       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:36.085378       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:55:36.143443       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:55:36.143489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:55:36.500951       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 17:55:37.088237       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0531 17:55:38.335907       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:55:23 UTC, end at Tue 2022-05-31 17:56:48 UTC. --
	May 31 17:55:51 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:51.700745    1766 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1aeece019be1aeb04fe3238c88015b8b2e8fc213fb740fe778aaa79ab1a650bc"
	May 31 17:55:51 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:51.701658    1766 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-4s59z through plugin: invalid network status for"
	May 31 17:55:51 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:51.703901    1766 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-h5n2c through plugin: invalid network status for"
	May 31 17:55:51 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:51.708789    1766 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="00dfd8cd64e1bf0b2b0881d7b429a9195a75f4020ea287c1450ac2e2abd95cb1"
	May 31 17:55:52 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:52.715606    1766 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-4s59z through plugin: invalid network status for"
	May 31 17:55:52 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:52.719058    1766 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-h5n2c through plugin: invalid network status for"
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.118207    1766 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4642d781-1424-4ca7-9ad6-f11cdc7ddabb-config-volume\") pod \"4642d781-1424-4ca7-9ad6-f11cdc7ddabb\" (UID: \"4642d781-1424-4ca7-9ad6-f11cdc7ddabb\") "
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.118280    1766 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx7dj\" (UniqueName: \"kubernetes.io/projected/4642d781-1424-4ca7-9ad6-f11cdc7ddabb-kube-api-access-fx7dj\") pod \"4642d781-1424-4ca7-9ad6-f11cdc7ddabb\" (UID: \"4642d781-1424-4ca7-9ad6-f11cdc7ddabb\") "
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: W0531 17:55:58.118391    1766 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/4642d781-1424-4ca7-9ad6-f11cdc7ddabb/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.118514    1766 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4642d781-1424-4ca7-9ad6-f11cdc7ddabb-config-volume" (OuterVolumeSpecName: "config-volume") pod "4642d781-1424-4ca7-9ad6-f11cdc7ddabb" (UID: "4642d781-1424-4ca7-9ad6-f11cdc7ddabb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.119989    1766 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4642d781-1424-4ca7-9ad6-f11cdc7ddabb-kube-api-access-fx7dj" (OuterVolumeSpecName: "kube-api-access-fx7dj") pod "4642d781-1424-4ca7-9ad6-f11cdc7ddabb" (UID: "4642d781-1424-4ca7-9ad6-f11cdc7ddabb"). InnerVolumeSpecName "kube-api-access-fx7dj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.218731    1766 reconciler.go:300] "Volume detached for volume \"kube-api-access-fx7dj\" (UniqueName: \"kubernetes.io/projected/4642d781-1424-4ca7-9ad6-f11cdc7ddabb-kube-api-access-fx7dj\") on node \"pause-20220531105516-2169\" DevicePath \"\""
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.218793    1766 reconciler.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4642d781-1424-4ca7-9ad6-f11cdc7ddabb-config-volume\") on node \"pause-20220531105516-2169\" DevicePath \"\""
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.752391    1766 scope.go:110] "RemoveContainer" containerID="edd70cea55ad1dbf0536024c7c0f310187eb84f8c3f04ea328b5737328785da9"
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.768387    1766 scope.go:110] "RemoveContainer" containerID="edd70cea55ad1dbf0536024c7c0f310187eb84f8c3f04ea328b5737328785da9"
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: E0531 17:55:58.772923    1766 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: edd70cea55ad1dbf0536024c7c0f310187eb84f8c3f04ea328b5737328785da9" containerID="edd70cea55ad1dbf0536024c7c0f310187eb84f8c3f04ea328b5737328785da9"
	May 31 17:55:58 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:58.773006    1766 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:edd70cea55ad1dbf0536024c7c0f310187eb84f8c3f04ea328b5737328785da9} err="failed to get container status \"edd70cea55ad1dbf0536024c7c0f310187eb84f8c3f04ea328b5737328785da9\": rpc error: code = Unknown desc = Error: No such container: edd70cea55ad1dbf0536024c7c0f310187eb84f8c3f04ea328b5737328785da9"
	May 31 17:55:59 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:59.600880    1766 topology_manager.go:200] "Topology Admit Handler"
	May 31 17:55:59 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:59.626750    1766 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgbmk\" (UniqueName: \"kubernetes.io/projected/e49a9253-f33c-408f-8cd4-1954e8b7b383-kube-api-access-xgbmk\") pod \"storage-provisioner\" (UID: \"e49a9253-f33c-408f-8cd4-1954e8b7b383\") " pod="kube-system/storage-provisioner"
	May 31 17:55:59 pause-20220531105516-2169 kubelet[1766]: I0531 17:55:59.626805    1766 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e49a9253-f33c-408f-8cd4-1954e8b7b383-tmp\") pod \"storage-provisioner\" (UID: \"e49a9253-f33c-408f-8cd4-1954e8b7b383\") " pod="kube-system/storage-provisioner"
	May 31 17:56:00 pause-20220531105516-2169 kubelet[1766]: I0531 17:56:00.292929    1766 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4642d781-1424-4ca7-9ad6-f11cdc7ddabb path="/var/lib/kubelet/pods/4642d781-1424-4ca7-9ad6-f11cdc7ddabb/volumes"
	May 31 17:56:01 pause-20220531105516-2169 kubelet[1766]: I0531 17:56:01.501278    1766 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	May 31 17:56:01 pause-20220531105516-2169 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 31 17:56:01 pause-20220531105516-2169 systemd[1]: kubelet.service: Succeeded.
	May 31 17:56:01 pause-20220531105516-2169 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [54da2cabd2b3] <==
	* I0531 17:56:00.095186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 17:56:00.104379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 17:56:00.104427       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 17:56:00.115202       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 17:56:00.115333       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"faf249a5-3f76-4a18-9535-64a1d47cb44d", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220531105516-2169_4bcc27f3-1f97-49cc-8d10-853d30381b92 became leader
	I0531 17:56:00.115366       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220531105516-2169_4bcc27f3-1f97-49cc-8d10-853d30381b92!
	I0531 17:56:00.216019       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220531105516-2169_4bcc27f3-1f97-49cc-8d10-853d30381b92!
	
	

-- /stdout --
** stderr ** 
	E0531 10:56:47.215959   10663 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
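The nested failure above is a symptom rather than a cause: the log collector's kubectl describe nodes call timed out during the TLS handshake because the apiserver was already unreachable. As a hedged sketch (not part of the harness), a probe along these lines separates "apiserver down" from "kubectl misconfigured"; the host port below is an assumption, since minikube publishes the in-container 8443 on a random 127.0.0.1 port reported by docker container inspect:

// probe.go — minimal liveness probe for the apiserver's /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // fail fast instead of stalling in the TLS handshake
		Transport: &http.Transport{
			// Liveness-only check: the server cert is signed by minikubeCA,
			// which this sketch does not load, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:8443/healthz") // assumed host port
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered:", resp.Status)
}

A connection or handshake error from this probe means no kubectl flag would have helped; the control plane itself is down.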
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220531105516-2169 -n pause-20220531105516-2169
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220531105516-2169 -n pause-20220531105516-2169: exit status 2 (16.100772259s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220531105516-2169" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (62.76s)
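On the "(may be ok)" note above: minikube status encodes component state in its exit code, so the harness treats the non-zero exit as data and reads the printed state instead of aborting. A minimal Go sketch of that tolerant pattern, reusing the binary and profile names from this run:

// status.go — treat a non-zero `minikube status` exit as a state report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.APIServer}}", "-p", "pause-20220531105516-2169")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Non-zero exit: the profile exists but a component is not running.
		fmt.Printf("apiserver state %q, exit code %d (may be ok)\n", state, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("failed to run minikube:", err) // binary missing, etc.
		return
	}
	fmt.Printf("apiserver state %q, all healthy\n", state)
}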

TestStartStop/group/old-k8s-version/serial/FirstStart (250.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220531110241-2169 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0531 11:03:03.061973    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220531110241-2169 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m10.237221656s)
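The start itself ran for the full 4m10s before exiting with status 109; its stdout and stderr dumps follow below. For context, a hedged sketch of driving the same command under an explicit deadline, so a wedged control-plane boot is killed promptly with its output preserved (the 5-minute budget and the trimmed flag set are assumptions):

// start.go — run `minikube start` under a context deadline and keep the
// combined output for triage. Flags are a subset of the invocation above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute) // assumed budget
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "start",
		"-p", "old-k8s-version-20220531110241-2169",
		"--memory=2200", "--driver=docker", "--kubernetes-version=v1.16.0")
	out, err := cmd.CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		fmt.Println("start exceeded deadline; partial output follows")
	}
	fmt.Printf("err: %v\n%s", err, out)
}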

-- stdout --
	* [old-k8s-version-20220531110241-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node old-k8s-version-20220531110241-2169 in cluster old-k8s-version-20220531110241-2169
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0531 11:02:41.335616   12612 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:02:41.335832   12612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:02:41.335838   12612 out.go:309] Setting ErrFile to fd 2...
	I0531 11:02:41.335842   12612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:02:41.335964   12612 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:02:41.336306   12612 out.go:303] Setting JSON to false
	I0531 11:02:41.352245   12612 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3730,"bootTime":1654016431,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:02:41.352347   12612 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:02:41.374532   12612 out.go:177] * [old-k8s-version-20220531110241-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:02:41.417708   12612 notify.go:193] Checking for updates...
	I0531 11:02:41.417734   12612 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:02:41.439790   12612 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:02:41.467989   12612 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:02:41.525322   12612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:02:41.569141   12612 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:02:41.590757   12612 config.go:178] Loaded profile config "kubenet-20220531104925-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:02:41.590836   12612 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:02:41.664511   12612 docker.go:137] docker version: linux-20.10.14
	I0531 11:02:41.664641   12612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:02:41.789681   12612 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:02:41.738966422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:02:41.811590   12612 out.go:177] * Using the docker driver based on user configuration
	I0531 11:02:41.832097   12612 start.go:284] selected driver: docker
	I0531 11:02:41.832111   12612 start.go:806] validating driver "docker" against <nil>
	I0531 11:02:41.832128   12612 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:02:41.834439   12612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:02:41.959626   12612 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:02:41.909329623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:02:41.959762   12612 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 11:02:41.959912   12612 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:02:41.981195   12612 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 11:02:42.002270   12612 cni.go:95] Creating CNI manager for ""
	I0531 11:02:42.002338   12612 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:02:42.002367   12612 start_flags.go:306] config:
	{Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:02:42.024150   12612 out.go:177] * Starting control plane node old-k8s-version-20220531110241-2169 in cluster old-k8s-version-20220531110241-2169
	I0531 11:02:42.066292   12612 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:02:42.087107   12612 out.go:177] * Pulling base image ...
	I0531 11:02:42.129317   12612 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:02:42.129320   12612 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:02:42.129447   12612 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 11:02:42.129464   12612 cache.go:57] Caching tarball of preloaded images
	I0531 11:02:42.129661   12612 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:02:42.129678   12612 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0531 11:02:42.130632   12612 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json ...
	I0531 11:02:42.130728   12612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json: {Name:mkd054687a1a27ec23fea8e8d48464df5a66a839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:02:42.192698   12612 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:02:42.192717   12612 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:02:42.192728   12612 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:02:42.192763   12612 start.go:352] acquiring machines lock for old-k8s-version-20220531110241-2169: {Name:mkde0b1c8a03f8862b5675925132e687b92ccd7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:02:42.192894   12612 start.go:356] acquired machines lock for "old-k8s-version-20220531110241-2169" in 121.139µs
	I0531 11:02:42.192921   12612 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:02:42.192972   12612 start.go:131] createHost starting for "" (driver="docker")
	I0531 11:02:42.214778   12612 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 11:02:42.215023   12612 start.go:165] libmachine.API.Create for "old-k8s-version-20220531110241-2169" (driver="docker")
	I0531 11:02:42.215061   12612 client.go:168] LocalClient.Create starting
	I0531 11:02:42.215179   12612 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 11:02:42.215224   12612 main.go:134] libmachine: Decoding PEM data...
	I0531 11:02:42.215242   12612 main.go:134] libmachine: Parsing certificate...
	I0531 11:02:42.215322   12612 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 11:02:42.215368   12612 main.go:134] libmachine: Decoding PEM data...
	I0531 11:02:42.215403   12612 main.go:134] libmachine: Parsing certificate...
	I0531 11:02:42.215994   12612 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220531110241-2169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 11:02:42.277620   12612 cli_runner.go:211] docker network inspect old-k8s-version-20220531110241-2169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 11:02:42.277712   12612 network_create.go:272] running [docker network inspect old-k8s-version-20220531110241-2169] to gather additional debugging logs...
	I0531 11:02:42.277729   12612 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220531110241-2169
	W0531 11:02:42.339495   12612 cli_runner.go:211] docker network inspect old-k8s-version-20220531110241-2169 returned with exit code 1
	I0531 11:02:42.339521   12612 network_create.go:275] error running [docker network inspect old-k8s-version-20220531110241-2169]: docker network inspect old-k8s-version-20220531110241-2169: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220531110241-2169
	I0531 11:02:42.339536   12612 network_create.go:277] output of [docker network inspect old-k8s-version-20220531110241-2169]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220531110241-2169
	
	** /stderr **
	I0531 11:02:42.339615   12612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 11:02:42.401739   12612 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000552600] misses:0}
	I0531 11:02:42.401776   12612 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 11:02:42.401793   12612 network_create.go:115] attempt to create docker network old-k8s-version-20220531110241-2169 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 11:02:42.401850   12612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220531110241-2169
	I0531 11:02:42.494449   12612 network_create.go:99] docker network old-k8s-version-20220531110241-2169 192.168.49.0/24 created
	I0531 11:02:42.494497   12612 kic.go:106] calculated static IP "192.168.49.2" for the "old-k8s-version-20220531110241-2169" container
	I0531 11:02:42.494599   12612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 11:02:42.559534   12612 cli_runner.go:164] Run: docker volume create old-k8s-version-20220531110241-2169 --label name.minikube.sigs.k8s.io=old-k8s-version-20220531110241-2169 --label created_by.minikube.sigs.k8s.io=true
	I0531 11:02:42.681977   12612 oci.go:103] Successfully created a docker volume old-k8s-version-20220531110241-2169
	I0531 11:02:42.682103   12612 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220531110241-2169-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220531110241-2169 --entrypoint /usr/bin/test -v old-k8s-version-20220531110241-2169:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 11:02:43.144350   12612 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220531110241-2169
	I0531 11:02:43.144387   12612 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:02:43.144400   12612 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 11:02:43.144495   12612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220531110241-2169:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 11:02:47.177161   12612 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220531110241-2169:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (4.03264661s)
	I0531 11:02:47.177185   12612 kic.go:188] duration metric: took 4.032833 seconds to extract preloaded images to volume
	I0531 11:02:47.177280   12612 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 11:02:47.301647   12612 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220531110241-2169 --name old-k8s-version-20220531110241-2169 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220531110241-2169 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220531110241-2169 --network old-k8s-version-20220531110241-2169 --ip 192.168.49.2 --volume old-k8s-version-20220531110241-2169:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 11:02:47.684874   12612 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Running}}
	I0531 11:02:47.756280   12612 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:02:47.831241   12612 cli_runner.go:164] Run: docker exec old-k8s-version-20220531110241-2169 stat /var/lib/dpkg/alternatives/iptables
	I0531 11:02:47.963919   12612 oci.go:247] the created container "old-k8s-version-20220531110241-2169" has a running status.
	I0531 11:02:47.963946   12612 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa...
	I0531 11:02:48.225946   12612 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 11:02:48.337666   12612 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:02:48.407540   12612 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 11:02:48.407559   12612 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220531110241-2169 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 11:02:48.529432   12612 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:02:48.598130   12612 machine.go:88] provisioning docker machine ...
	I0531 11:02:48.598170   12612 ubuntu.go:169] provisioning hostname "old-k8s-version-20220531110241-2169"
	I0531 11:02:48.598253   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:48.668461   12612 main.go:134] libmachine: Using SSH client type: native
	I0531 11:02:48.668638   12612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51147 <nil> <nil>}
	I0531 11:02:48.668666   12612 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220531110241-2169 && echo "old-k8s-version-20220531110241-2169" | sudo tee /etc/hostname
	I0531 11:02:48.789150   12612 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220531110241-2169
	
	I0531 11:02:48.789268   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:48.859652   12612 main.go:134] libmachine: Using SSH client type: native
	I0531 11:02:48.859794   12612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51147 <nil> <nil>}
	I0531 11:02:48.859808   12612 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220531110241-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220531110241-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220531110241-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:02:48.972774   12612 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:02:48.972800   12612 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:02:48.972825   12612 ubuntu.go:177] setting up certificates
	I0531 11:02:48.972837   12612 provision.go:83] configureAuth start
	I0531 11:02:48.972902   12612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:02:49.042183   12612 provision.go:138] copyHostCerts
	I0531 11:02:49.042267   12612 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:02:49.042275   12612 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:02:49.042370   12612 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:02:49.042604   12612 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:02:49.042613   12612 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:02:49.042675   12612 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:02:49.042803   12612 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:02:49.042809   12612 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:02:49.042870   12612 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:02:49.042976   12612 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220531110241-2169 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220531110241-2169]
	I0531 11:02:49.286207   12612 provision.go:172] copyRemoteCerts
	I0531 11:02:49.286270   12612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:02:49.286311   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:49.355858   12612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51147 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:02:49.439794   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:02:49.457258   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0531 11:02:49.474307   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:02:49.491666   12612 provision.go:86] duration metric: configureAuth took 518.822987ms
	I0531 11:02:49.491679   12612 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:02:49.491818   12612 config.go:178] Loaded profile config "old-k8s-version-20220531110241-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 11:02:49.491876   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:49.561413   12612 main.go:134] libmachine: Using SSH client type: native
	I0531 11:02:49.561582   12612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51147 <nil> <nil>}
	I0531 11:02:49.561599   12612 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:02:49.676588   12612 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:02:49.676600   12612 ubuntu.go:71] root file system type: overlay
	I0531 11:02:49.676755   12612 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:02:49.676826   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:49.746518   12612 main.go:134] libmachine: Using SSH client type: native
	I0531 11:02:49.746674   12612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51147 <nil> <nil>}
	I0531 11:02:49.746722   12612 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:02:49.866818   12612 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:02:49.866910   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:49.937200   12612 main.go:134] libmachine: Using SSH client type: native
	I0531 11:02:49.937348   12612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51147 <nil> <nil>}
	I0531 11:02:49.937361   12612 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:02:50.542855   12612 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 18:02:49.866158878 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0531 11:02:50.542886   12612 machine.go:91] provisioned docker machine in 1.944759793s
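	
	For reference, the unit update that just completed is a compare-and-swap: the candidate file is written to docker.service.new, and the daemon is reloaded and restarted only when it differs from the live unit (diff exits non-zero). A standalone sketch of that pattern, using the same paths and systemctl calls as the SSH command above:
	
	# Restart docker only if the newly rendered unit differs from the live one.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi
	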
	I0531 11:02:50.542894   12612 client.go:171] LocalClient.Create took 8.327928698s
	I0531 11:02:50.542923   12612 start.go:173] duration metric: libmachine.API.Create for "old-k8s-version-20220531110241-2169" took 8.327991645s
	I0531 11:02:50.542967   12612 start.go:306] post-start starting for "old-k8s-version-20220531110241-2169" (driver="docker")
	I0531 11:02:50.542978   12612 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:02:50.543082   12612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:02:50.543185   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:50.613221   12612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51147 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:02:50.698581   12612 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:02:50.702137   12612 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:02:50.702151   12612 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:02:50.702158   12612 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:02:50.702165   12612 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:02:50.702177   12612 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:02:50.702287   12612 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:02:50.702427   12612 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:02:50.702577   12612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:02:50.709643   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:02:50.726676   12612 start.go:309] post-start completed in 183.696787ms
	I0531 11:02:50.727199   12612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:02:50.796165   12612 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json ...
	I0531 11:02:50.796547   12612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:02:50.796596   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:50.866618   12612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51147 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:02:50.951876   12612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:02:50.956558   12612 start.go:134] duration metric: createHost completed in 8.763684908s
	I0531 11:02:50.956577   12612 start.go:81] releasing machines lock for "old-k8s-version-20220531110241-2169", held for 8.763779519s
	I0531 11:02:50.956654   12612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:02:51.025736   12612 ssh_runner.go:195] Run: systemctl --version
	I0531 11:02:51.025741   12612 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:02:51.025805   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:51.025811   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:51.102166   12612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51147 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:02:51.103999   12612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51147 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:02:51.310697   12612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:02:51.320181   12612 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:02:51.331138   12612 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:02:51.331185   12612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:02:51.341360   12612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:02:51.371147   12612 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:02:51.447136   12612 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:02:51.522066   12612 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:02:51.532228   12612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:02:51.597859   12612 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:02:51.607630   12612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:02:51.643276   12612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:02:51.720373   12612 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0531 11:02:51.720479   12612 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220531110241-2169 dig +short host.docker.internal
	I0531 11:02:51.855586   12612 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:02:51.855791   12612 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:02:51.860110   12612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
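	
	The /etc/hosts rewrite above is an idempotent replace-then-append: any stale 'host.minikube.internal' line is filtered out, the fresh entry is appended, and the result is copied back with sudo. The same pattern, generalized (ENTRY_IP and ENTRY_NAME are placeholder variables, not values from this run):
	
	# Idempotently pin ENTRY_NAME to ENTRY_IP in /etc/hosts.
	{ grep -v $'\t'"$ENTRY_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ENTRY_IP" "$ENTRY_NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
	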
	I0531 11:02:51.873865   12612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:02:51.948455   12612 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:02:51.948527   12612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:02:51.993068   12612 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 11:02:51.993103   12612 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:02:51.993196   12612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:02:52.023478   12612 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 11:02:52.023495   12612 cache_images.go:84] Images are preloaded, skipping loading
	I0531 11:02:52.023574   12612 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:02:52.119702   12612 cni.go:95] Creating CNI manager for ""
	I0531 11:02:52.119714   12612 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:02:52.119726   12612 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:02:52.119743   12612 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220531110241-2169 NodeName:old-k8s-version-20220531110241-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:02:52.119848   12612 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220531110241-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220531110241-2169
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
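	
	With this kubeadm.yaml rendered, the required control-plane images can be pre-pulled before 'kubeadm init' runs; the init output below suggests the same step ('kubeadm config images pull'). A sketch using the config path and binary location from this log:
	
	# Pre-pull the images kubeadm will need for this config.
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	    kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml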
	
	I0531 11:02:52.119921   12612 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220531110241-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 11:02:52.119981   12612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0531 11:02:52.128071   12612 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:02:52.128128   12612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:02:52.135614   12612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0531 11:02:52.148501   12612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:02:52.165917   12612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0531 11:02:52.181680   12612 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:02:52.186734   12612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:02:52.198584   12612 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169 for IP: 192.168.49.2
	I0531 11:02:52.198712   12612 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:02:52.198764   12612 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:02:52.198818   12612 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.key
	I0531 11:02:52.198832   12612 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.crt with IP's: []
	I0531 11:02:52.323334   12612 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.crt ...
	I0531 11:02:52.323348   12612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.crt: {Name:mkf76d740cb6c1fbb557b7d44bf1bb3b3fc5f5c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:02:52.323637   12612 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.key ...
	I0531 11:02:52.323644   12612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.key: {Name:mkc40b89d1b79f932dafed5cee226a10b353316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:02:52.323837   12612 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key.dd3b5fb2
	I0531 11:02:52.323852   12612 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 11:02:52.464390   12612 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt.dd3b5fb2 ...
	I0531 11:02:52.464408   12612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt.dd3b5fb2: {Name:mk7a9351e11a1993199e3e95ffef82b61e120a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:02:52.464695   12612 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key.dd3b5fb2 ...
	I0531 11:02:52.464703   12612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key.dd3b5fb2: {Name:mke781dccb0182290de9497261560a06c33c276b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:02:52.464917   12612 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt
	I0531 11:02:52.465118   12612 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key
	I0531 11:02:52.465277   12612 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key
	I0531 11:02:52.465293   12612 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.crt with IP's: []
	I0531 11:02:52.708781   12612 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.crt ...
	I0531 11:02:52.708798   12612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.crt: {Name:mk9a19cf355581ed14decc0f50c147c33602a1c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:02:52.709101   12612 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key ...
	I0531 11:02:52.709110   12612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key: {Name:mk763154eac24ed3d9739ca7bacd150fe6c4cd2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:02:52.709520   12612 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:02:52.709564   12612 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:02:52.709576   12612 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:02:52.709605   12612 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:02:52.709633   12612 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:02:52.709674   12612 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:02:52.709758   12612 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:02:52.710242   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:02:52.731318   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 11:02:52.752001   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:02:52.776332   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 11:02:52.797776   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:02:52.816828   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:02:52.836265   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:02:52.858183   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:02:52.878406   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:02:52.898491   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:02:52.917563   12612 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:02:52.937690   12612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:02:52.951159   12612 ssh_runner.go:195] Run: openssl version
	I0531 11:02:52.957508   12612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:02:52.967752   12612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:02:52.972853   12612 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:02:52.972946   12612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:02:52.981615   12612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:02:52.996629   12612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:02:53.008970   12612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:02:53.015857   12612 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:02:53.015938   12612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:02:53.024183   12612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:02:53.037587   12612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:02:53.052508   12612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:02:53.059655   12612 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:02:53.059732   12612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:02:53.068940   12612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:02:53.078716   12612 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:02:53.078824   12612 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:02:53.107800   12612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:02:53.119580   12612 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:02:53.130429   12612 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:02:53.130499   12612 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:02:53.146402   12612 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:02:53.146449   12612 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:04:51.945034   12612 out.go:204]   - Generating certificates and keys ...
	I0531 11:04:51.987482   12612 out.go:204]   - Booting up control plane ...
	W0531 11:04:51.990853   12612 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220531110241-2169 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220531110241-2169 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
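	
	Following the advice in the output above, a one-pass triage inside the node (e.g. via 'minikube ssh' or 'docker exec' into the node container) would combine the kubelet status, its journal tail, and the logs of any kube containers; in this run the container listings below come back empty, which points at the kubelet itself:
	
	# Gather kubelet state and any kube container logs in one pass.
	systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	for c in $(docker ps -a --filter name=k8s_ --format '{{.ID}}'); do
	    echo "=== container $c ==="; docker logs --tail 20 "$c"
	done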
	
	I0531 11:04:51.990896   12612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:04:52.416300   12612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:04:52.425558   12612 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:04:52.425607   12612 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:04:52.432569   12612 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:04:52.432602   12612 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:04:53.147480   12612 out.go:204]   - Generating certificates and keys ...
	I0531 11:04:53.958986   12612 out.go:204]   - Booting up control plane ...
	I0531 11:06:48.872941   12612 kubeadm.go:397] StartCluster complete in 3m55.797092049s
	I0531 11:06:48.873016   12612 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:06:48.903130   12612 logs.go:274] 0 containers: []
	W0531 11:06:48.903144   12612 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:06:48.903203   12612 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:06:48.932003   12612 logs.go:274] 0 containers: []
	W0531 11:06:48.932014   12612 logs.go:276] No container was found matching "etcd"
	I0531 11:06:48.932067   12612 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:06:48.961748   12612 logs.go:274] 0 containers: []
	W0531 11:06:48.961762   12612 logs.go:276] No container was found matching "coredns"
	I0531 11:06:48.961832   12612 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:06:48.991608   12612 logs.go:274] 0 containers: []
	W0531 11:06:48.991621   12612 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:06:48.991673   12612 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:06:49.020247   12612 logs.go:274] 0 containers: []
	W0531 11:06:49.020259   12612 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:06:49.020313   12612 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:06:49.050668   12612 logs.go:274] 0 containers: []
	W0531 11:06:49.050685   12612 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:06:49.050741   12612 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:06:49.079763   12612 logs.go:274] 0 containers: []
	W0531 11:06:49.079775   12612 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:06:49.079830   12612 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:06:49.108894   12612 logs.go:274] 0 containers: []
	W0531 11:06:49.108907   12612 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:06:49.108914   12612 logs.go:123] Gathering logs for kubelet ...
	I0531 11:06:49.108922   12612 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:06:49.149288   12612 logs.go:123] Gathering logs for dmesg ...
	I0531 11:06:49.149301   12612 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:06:49.162539   12612 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:06:49.162552   12612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:06:49.215025   12612 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:06:49.215040   12612 logs.go:123] Gathering logs for Docker ...
	I0531 11:06:49.215048   12612 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:06:49.229022   12612 logs.go:123] Gathering logs for container status ...
	I0531 11:06:49.229034   12612 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:06:51.286145   12612 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057124059s)
	W0531 11:06:51.286258   12612 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0531 11:06:51.286272   12612 out.go:239] * 
	W0531 11:06:51.286398   12612 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 11:06:51.286411   12612 out.go:239] * 
	W0531 11:06:51.286919   12612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 11:06:51.349651   12612 out.go:177] 
	W0531 11:06:51.392004   12612 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 11:06:51.412673   12612 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 11:06:51.412769   12612 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0531 11:06:51.454927   12612 out.go:177] 

                                                
                                                
** /stderr **
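For local triage, the kubelet health probe that kubeadm retries above and the diagnostics it recommends can be reproduced by hand; a sketch, assuming the kic container from this run is still up (`minikube` below stands for the build under test, out/minikube-darwin-amd64). Every command is quoted from the kubeadm output and the Suggestion line above, wrapped in `minikube ssh` where it must run inside the node:

	# Probe the kubelet health endpoint kubeadm polls (connection refused throughout this run)
	minikube -p old-k8s-version-20220531110241-2169 ssh -- curl -sSL http://localhost:10248/healthz
	# Inspect the kubelet service and its journal, per the kubeadm advice
	minikube -p old-k8s-version-20220531110241-2169 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-20220531110241-2169 ssh -- sudo journalctl -xeu kubelet
	# List Kubernetes containers started by the runtime
	minikube -p old-k8s-version-20220531110241-2169 ssh -- "docker ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override suggested in the log
	minikube start -p old-k8s-version-20220531110241-2169 --extra-config=kubelet.cgroup-driver=systemd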
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220531110241-2169 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531110241-2169
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531110241-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815",
	        "Created": "2022-05-31T18:02:47.387078025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:02:47.697577313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hostname",
	        "HostsPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hosts",
	        "LogPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815-json.log",
	        "Name": "/old-k8s-version-20220531110241-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531110241-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531110241-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531110241-2169",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531110241-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531110241-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99c60f27f5ff42fbe095a3999df8b77dd1e171cb46c0a1c5e1b9ff11c0670e9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51147"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51151"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/99c60f27f5ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531110241-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "df301a213db6",
	                        "old-k8s-version-20220531110241-2169"
	                    ],
	                    "NetworkID": "371f88932f2f86b1e4c7d7ee4813eb521c132449a1b646e6adc62c4e1df95fe6",
	                    "EndpointID": "8d86a29510e832262aa35fd78bc0185e767442c6f9338fb511f6bac2eb09d8f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
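Only a few fields of this inspect dump matter for the post-mortem checks (container state and the published host ports); when triaging locally, the same data can be pulled with Go-template queries instead of the full JSON. A sketch against the same container name:

	# Container state ("running" in the dump above)
	docker inspect -f '{{.State.Status}}' old-k8s-version-20220531110241-2169
	# Host-port bindings for 22, 2376, 5000, 8443 and 32443, as JSON
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-20220531110241-2169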
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 6 (436.070132ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 11:06:52.049951   13014 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220531110241-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220531110241-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.78s)
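The status probe above also reported a stale kubectl context. When reproducing locally, the warning's own remedy can be applied and then verified; a sketch using this run's profile name:

	# Repoint kubectl at the current cluster endpoint, as the warning advises
	minikube update-context -p old-k8s-version-20220531110241-2169
	# Confirm the context now resolves
	kubectl config get-contexts old-k8s-version-20220531110241-2169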

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220531110241-2169 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220531110241-2169 create -f testdata/busybox.yaml: exit status 1 (29.636371ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220531110241-2169" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:198: kubectl --context old-k8s-version-20220531110241-2169 create -f testdata/busybox.yaml failed: exit status 1
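This failure is downstream of FirstStart: the cluster never finished bootstrapping, so no kubeconfig entry was written for the profile (the status error above notes the name does not appear in the kubeconfig file). A pre-flight guard on the context before the create, sketched with the same names:

	# Deploy only if the context exists; empty grep output reproduces this failure mode
	kubectl config get-contexts -o name | grep -x old-k8s-version-20220531110241-2169 \
	  && kubectl --context old-k8s-version-20220531110241-2169 create -f testdata/busybox.yaml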
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531110241-2169
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531110241-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815",
	        "Created": "2022-05-31T18:02:47.387078025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:02:47.697577313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hostname",
	        "HostsPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hosts",
	        "LogPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815-json.log",
	        "Name": "/old-k8s-version-20220531110241-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531110241-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531110241-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531110241-2169",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531110241-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531110241-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99c60f27f5ff42fbe095a3999df8b77dd1e171cb46c0a1c5e1b9ff11c0670e9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51147"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51151"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/99c60f27f5ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531110241-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "df301a213db6",
	                        "old-k8s-version-20220531110241-2169"
	                    ],
	                    "NetworkID": "371f88932f2f86b1e4c7d7ee4813eb521c132449a1b646e6adc62c4e1df95fe6",
	                    "EndpointID": "8d86a29510e832262aa35fd78bc0185e767442c6f9338fb511f6bac2eb09d8f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
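The mapped host ports recorded in the inspect dump above can be read back without paging through the full JSON; a minimal sketch using Docker's Go-template form and an equivalent jq lookup (the profile name and the 8443/tcp key are taken from the dump above; the jq variant assumes jq is installed):

    # host port Docker mapped to the node's 8443/tcp (apiserver)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-20220531110241-2169
    # same lookup with jq
    docker inspect old-k8s-version-20220531110241-2169 | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'

Against the dump above, both print 51151.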
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 6 (463.332554ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 11:06:52.594172   13027 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220531110241-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220531110241-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531110241-2169
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531110241-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815",
	        "Created": "2022-05-31T18:02:47.387078025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:02:47.697577313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hostname",
	        "HostsPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hosts",
	        "LogPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815-json.log",
	        "Name": "/old-k8s-version-20220531110241-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531110241-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531110241-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/docker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef35093e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531110241-2169",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531110241-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531110241-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99c60f27f5ff42fbe095a3999df8b77dd1e171cb46c0a1c5e1b9ff11c0670e9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51147"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51151"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/99c60f27f5ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531110241-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "df301a213db6",
	                        "old-k8s-version-20220531110241-2169"
	                    ],
	                    "NetworkID": "371f88932f2f86b1e4c7d7ee4813eb521c132449a1b646e6adc62c4e1df95fe6",
	                    "EndpointID": "8d86a29510e832262aa35fd78bc0185e767442c6f9338fb511f6bac2eb09d8f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 6 (436.707255ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 11:06:53.136820   13039 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220531110241-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220531110241-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.09s)
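DeployApp never reached a deployment: the status checks above show the container Running while its entry is missing from the harness kubeconfig, so every kubectl call against that context fails. A minimal repair sketch following the warning printed by status (assuming the same KUBECONFIG the test harness uses is active):

    # confirm the context really is absent from the kubeconfig
    kubectl config get-contexts -o name | grep old-k8s-version-20220531110241-2169 || echo "context missing"
    # regenerate it from the live profile, as the warning suggests
    out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 update-context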

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220531110241-2169 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0531 11:06:54.464843    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:57.119059    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:57.124325    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:57.136414    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:57.156945    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:57.197214    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:57.277350    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:57.437617    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:57.759840    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:58.400333    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:59.680799    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:07:02.241593    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:07:07.362573    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:07:14.945379    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:17.602652    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:07:27.922938    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:27.928805    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:27.939908    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:27.960247    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:28.002421    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:28.082642    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:28.243311    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:28.563587    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:29.203749    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:30.483886    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:33.045536    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:38.156765    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:07:38.166106    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:48.406318    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:55.906876    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:07:58.529549    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:08:03.057735    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
E0531 11:08:08.886811    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:12.280050    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:08:19.118546    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
start_stop_delete_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220531110241-2169 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.127375276s)

                                                
                                                
-- stdout --
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:209: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220531110241-2169 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
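The stderr block shows every addon manifest rejected with connection refused on 127.0.0.1:8443, i.e. the apiserver inside the node was not answering when the addon callback ran. A minimal diagnostic sketch, assuming the node container is still up and has curl available (the kubectl path and manifest name are quoted verbatim from the error above):

    # is anything answering on the apiserver port inside the node?
    out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 ssh -- curl -sk https://localhost:8443/api?timeout=32s
    # once it answers, the failed callback can be replayed by hand
    out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml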
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220531110241-2169 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220531110241-2169 describe deploy/metrics-server -n kube-system: exit status 1 (30.234005ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220531110241-2169" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220531110241-2169 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
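The assertion at start_stop_delete_test.go:223 boils down to an image-name check on the deployment; with a working context it can be reproduced by hand, e.g. (a sketch, assuming the context has been repaired first):

    # the test expects this to contain fake.domain/k8s.gcr.io/echoserver:1.4
    kubectl --context old-k8s-version-20220531110241-2169 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'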
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531110241-2169
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531110241-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815",
	        "Created": "2022-05-31T18:02:47.387078025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:02:47.697577313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hostname",
	        "HostsPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hosts",
	        "LogPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815-json.log",
	        "Name": "/old-k8s-version-20220531110241-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531110241-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531110241-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/docker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef35093e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531110241-2169",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531110241-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531110241-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99c60f27f5ff42fbe095a3999df8b77dd1e171cb46c0a1c5e1b9ff11c0670e9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51147"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51151"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/99c60f27f5ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531110241-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "df301a213db6",
	                        "old-k8s-version-20220531110241-2169"
	                    ],
	                    "NetworkID": "371f88932f2f86b1e4c7d7ee4813eb521c132449a1b646e6adc62c4e1df95fe6",
	                    "EndpointID": "8d86a29510e832262aa35fd78bc0185e767442c6f9338fb511f6bac2eb09d8f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 6 (487.918286ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 11:08:22.854387   13070 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220531110241-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220531110241-2169" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.72s)
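The cert_rotation.go:168 lines interleaved through this test appear to come from client-go's certificate-rotation watcher still tracking client certs for profiles (bridge, false, enable-default-cni, ...) whose files were already deleted; they are noise relative to this failure. The mismatch between what is left on disk and what the kubeconfig references can be listed with a short sketch (the path follows the Jenkins layout shown in the errors, shortened here to $MKHOME):

    MKHOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
    # profiles that still have certs on disk
    ls "$MKHOME/profiles"
    # contexts the kubeconfig still references
    kubectl config get-contexts -o name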

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (493.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220531110241-2169 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0531 11:08:35.736501    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:35.741602    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:35.751685    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:35.771865    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:35.812134    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:35.892538    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:36.054707    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:36.375410    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:37.017681    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:38.298285    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:40.859069    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:45.979168    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:46.311265    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:49.847332    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:51.575853    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 11:08:56.219235    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:08:57.924348    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:09:00.030225    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 11:09:08.519109    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 11:09:14.065183    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:09:16.699207    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:09:17.828232    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:09:25.602974    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:09:41.037873    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:09:57.659007    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:10:11.767291    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
E0531 11:10:14.682668    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:10:23.141285    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 11:10:28.432203    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:10:42.367789    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:10:56.118224    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
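The cadence of the cert_rotation errors above (intervals of roughly 5ms doubling toward a cap of a few seconds) is an exponential-backoff retry against client.crt files that are likely left over from profiles deleted earlier in the run. A minimal sketch of that retry shape; step count and cap are illustrative, not minikube's actual values:

package main

import (
	"fmt"
	"os"
	"time"
)

// loadCertWithBackoff retries reading a client cert with doubling delays,
// matching the cadence visible in the timestamps above.
func loadCertWithBackoff(path string, steps int, maxDelay time.Duration) ([]byte, error) {
	delay := 5 * time.Millisecond
	var lastErr error
	for i := 0; i < steps; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		lastErr = err
		fmt.Fprintf(os.Stderr, "key failed with : %v\n", err) // same format as the log lines above
		time.Sleep(delay)
		if delay < maxDelay {
			delay *= 2
		}
	}
	return nil, lastErr
}

func main() {
	if _, err := loadCertWithBackoff("client.crt", 12, 5*time.Second); err != nil {
		os.Exit(1)
	}
}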

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220531110241-2169 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m8.847336103s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220531110241-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220531110241-2169 in cluster old-k8s-version-20220531110241-2169
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220531110241-2169" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 11:08:24.864423   13098 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:08:24.864582   13098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:08:24.864588   13098 out.go:309] Setting ErrFile to fd 2...
	I0531 11:08:24.864592   13098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:08:24.864692   13098 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:08:24.864985   13098 out.go:303] Setting JSON to false
	I0531 11:08:24.879863   13098 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4073,"bootTime":1654016431,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:08:24.879988   13098 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:08:24.902035   13098 out.go:177] * [old-k8s-version-20220531110241-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:08:24.945014   13098 notify.go:193] Checking for updates...
	I0531 11:08:24.966436   13098 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:08:24.987830   13098 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:08:25.009108   13098 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:08:25.030802   13098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:08:25.052040   13098 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:08:25.074525   13098 config.go:178] Loaded profile config "old-k8s-version-20220531110241-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 11:08:25.096505   13098 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0531 11:08:25.117749   13098 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:08:25.191594   13098 docker.go:137] docker version: linux-20.10.14
	I0531 11:08:25.191723   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:08:25.317616   13098 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:08:25.254314323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:08:25.361153   13098 out.go:177] * Using the docker driver based on existing profile
	I0531 11:08:25.382343   13098 start.go:284] selected driver: docker
	I0531 11:08:25.382377   13098 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:25.382520   13098 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:08:25.385966   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:08:25.513518   13098 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:08:25.450743823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:08:25.513684   13098 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:08:25.513704   13098 cni.go:95] Creating CNI manager for ""
	I0531 11:08:25.513712   13098 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:08:25.513726   13098 start_flags.go:306] config:
	{Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:25.535504   13098 out.go:177] * Starting control plane node old-k8s-version-20220531110241-2169 in cluster old-k8s-version-20220531110241-2169
	I0531 11:08:25.561264   13098 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:08:25.582190   13098 out.go:177] * Pulling base image ...
	I0531 11:08:25.624028   13098 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:08:25.624035   13098 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:08:25.624091   13098 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 11:08:25.624108   13098 cache.go:57] Caching tarball of preloaded images
	I0531 11:08:25.624296   13098 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:08:25.624329   13098 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0531 11:08:25.625057   13098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json ...
	I0531 11:08:25.688021   13098 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:08:25.688038   13098 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:08:25.688049   13098 cache.go:206] Successfully downloaded all kic artifacts
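The preload bookkeeping above (check whether the preloaded-images tarball exists, and skip the download when it is already cached) amounts to a stat against a well-known path under MINIKUBE_HOME. An illustrative reduction; ensurePreload and its error text are ours, not minikube's code:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// ensurePreload returns the cached tarball path when it already exists,
// mirroring the "Found ... in cache, skipping download" lines above.
func ensurePreload(minikubeHome, k8sVersion string) (string, error) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	if _, err := os.Stat(path); err == nil {
		return path, nil // cached: skip download
	} else if !errors.Is(err, fs.ErrNotExist) {
		return "", err
	}
	return "", fmt.Errorf("preload %s not cached; a real implementation would download it here", name)
}

func main() {
	p, err := ensurePreload(os.Getenv("MINIKUBE_HOME"), "v1.16.0")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("using preload:", p)
}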
	I0531 11:08:25.688095   13098 start.go:352] acquiring machines lock for old-k8s-version-20220531110241-2169: {Name:mkde0b1c8a03f8862b5675925132e687b92ccd7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:08:25.688173   13098 start.go:356] acquired machines lock for "old-k8s-version-20220531110241-2169" in 55.993µs
	I0531 11:08:25.688192   13098 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:08:25.688224   13098 fix.go:55] fixHost starting: 
	I0531 11:08:25.688466   13098 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:08:25.755111   13098 fix.go:103] recreateIfNeeded on old-k8s-version-20220531110241-2169: state=Stopped err=<nil>
	W0531 11:08:25.755155   13098 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:08:25.797678   13098 out.go:177] * Restarting existing docker container for "old-k8s-version-20220531110241-2169" ...
	I0531 11:08:25.818473   13098 cli_runner.go:164] Run: docker start old-k8s-version-20220531110241-2169
	I0531 11:08:26.192165   13098 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:08:26.263698   13098 kic.go:416] container "old-k8s-version-20220531110241-2169" state is running.
	I0531 11:08:26.264351   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:26.337917   13098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json ...
	I0531 11:08:26.338311   13098 machine.go:88] provisioning docker machine ...
	I0531 11:08:26.338340   13098 ubuntu.go:169] provisioning hostname "old-k8s-version-20220531110241-2169"
	I0531 11:08:26.338453   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.410821   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:26.411035   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:26.411048   13098 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220531110241-2169 && echo "old-k8s-version-20220531110241-2169" | sudo tee /etc/hostname
	I0531 11:08:26.530934   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220531110241-2169
	
	I0531 11:08:26.531026   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.602777   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:26.602942   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:26.602957   13098 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220531110241-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220531110241-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220531110241-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:08:26.716578   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: 
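Each provisioning step above is a one-shot shell command over the container's forwarded SSH port (127.0.0.1:51933 in this run). A minimal sketch of that transport, assuming golang.org/x/crypto/ssh; runOverSSH is our name, and skipping host-key verification is only defensible for a throwaway local container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens one session per command, the way each provisioning
// step above is executed against the machine's id_rsa-authenticated port.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable only because the target is a local, disposable container.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: runssh <path-to-id_rsa>")
		os.Exit(2)
	}
	out, err := runOverSSH("127.0.0.1:51933", "docker", os.Args[1], "hostname")
	fmt.Print(out)
	if err != nil {
		os.Exit(1)
	}
}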
	I0531 11:08:26.716599   13098 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:08:26.716617   13098 ubuntu.go:177] setting up certificates
	I0531 11:08:26.716625   13098 provision.go:83] configureAuth start
	I0531 11:08:26.716695   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:26.787003   13098 provision.go:138] copyHostCerts
	I0531 11:08:26.787080   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:08:26.787096   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:08:26.787190   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:08:26.787413   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:08:26.787423   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:08:26.787482   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:08:26.787625   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:08:26.787631   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:08:26.787687   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:08:26.787803   13098 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220531110241-2169 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220531110241-2169]
	I0531 11:08:26.886368   13098 provision.go:172] copyRemoteCerts
	I0531 11:08:26.886424   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:08:26.886475   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.957750   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.039830   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0531 11:08:27.059791   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:08:27.076499   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:08:27.095218   13098 provision.go:86] duration metric: configureAuth took 378.579892ms
	I0531 11:08:27.095231   13098 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:08:27.095385   13098 config.go:178] Loaded profile config "old-k8s-version-20220531110241-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 11:08:27.095451   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.165741   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.165895   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.165906   13098 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:08:27.275339   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:08:27.275354   13098 ubuntu.go:71] root file system type: overlay
	I0531 11:08:27.275532   13098 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:08:27.275598   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.345524   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.345724   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.345774   13098 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:08:27.466818   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:08:27.466905   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.537313   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.537482   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.537496   13098 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:08:27.652716   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:08:27.652730   13098 machine.go:91] provisioned docker machine in 1.314427116s
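The `sudo diff -u ... || { mv ...; systemctl -f restart docker; }` one-liner above is an idempotence guard: the new unit file is only promoted, and dockerd only restarted, when the freshly rendered unit actually differs from what is installed. The same compare-then-promote pattern, sketched with illustrative paths:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// syncConfig mirrors the diff-or-swap step: return false (no restart needed)
// when the live file already matches, otherwise write the candidate beside it
// and rename it into place, signalling the caller to restart the service.
func syncConfig(livePath string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(livePath)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil
	}
	tmp := livePath + ".new"
	if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, livePath)
}

func main() {
	changed, err := syncConfig("docker.service", []byte("[Unit]\n"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("restart needed:", changed)
}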
	I0531 11:08:27.652737   13098 start.go:306] post-start starting for "old-k8s-version-20220531110241-2169" (driver="docker")
	I0531 11:08:27.652741   13098 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:08:27.652808   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:08:27.652850   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.722531   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.803808   13098 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:08:27.807457   13098 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:08:27.807489   13098 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:08:27.807499   13098 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:08:27.807506   13098 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:08:27.807514   13098 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:08:27.807618   13098 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:08:27.807774   13098 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:08:27.807937   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:08:27.815028   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:08:27.832481   13098 start.go:309] post-start completed in 179.738586ms
	I0531 11:08:27.832554   13098 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:08:27.832607   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.903577   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.985899   13098 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:08:27.990822   13098 fix.go:57] fixHost completed within 2.302646254s
	I0531 11:08:27.990835   13098 start.go:81] releasing machines lock for "old-k8s-version-20220531110241-2169", held for 2.30268259s
	I0531 11:08:27.990918   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:28.061472   13098 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:08:28.061476   13098 ssh_runner.go:195] Run: systemctl --version
	I0531 11:08:28.061544   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.061541   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.137038   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:28.138708   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:28.362084   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:08:28.375292   13098 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:08:28.385346   13098 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:08:28.385407   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:08:28.394958   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:08:28.407962   13098 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:08:28.477039   13098 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:08:28.550358   13098 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:08:28.560122   13098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:08:28.629417   13098 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:08:28.639660   13098 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:08:28.673402   13098 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:08:28.751209   13098 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0531 11:08:28.751364   13098 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220531110241-2169 dig +short host.docker.internal
	I0531 11:08:28.891410   13098 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:08:28.891541   13098 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:08:28.895978   13098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:08:28.906678   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.976360   13098 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:08:28.976426   13098 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:08:29.006401   13098 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 11:08:29.006417   13098 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:08:29.006493   13098 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:08:29.035658   13098 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 11:08:29.035672   13098 cache_images.go:84] Images are preloaded, skipping loading
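"Images are preloaded, skipping loading" is the result of diffing the daemon's image list against the set this Kubernetes version needs. A sketch of that check; the expected list here is abbreviated from the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// missingImages lists the expected images the daemon does not yet have,
// using the same `docker images --format` invocation seen in the log.
func missingImages(expected []string) ([]string, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	var missing []string
	for _, img := range expected {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing, nil
}

func main() {
	missing, err := missingImages([]string{
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/pause:3.1",
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("to load:", missing)
}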
	I0531 11:08:29.035742   13098 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:08:29.110219   13098 cni.go:95] Creating CNI manager for ""
	I0531 11:08:29.110231   13098 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:08:29.110243   13098 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:08:29.110256   13098 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220531110241-2169 NodeName:old-k8s-version-20220531110241-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:08:29.110376   13098 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220531110241-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220531110241-2169
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
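The generated KubeletConfiguration pins cgroupDriver: systemd, presumably derived from the `docker info --format {{.CgroupDriver}}` query at 11:08:29.035; a mismatch between kubelet and the runtime is a classic cause of a control plane that never boots. A small cross-check sketch, assuming gopkg.in/yaml.v3; checkCgroupDriver is ours:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"

	"gopkg.in/yaml.v3"
)

// checkCgroupDriver compares the kubelet document's cgroupDriver with what
// the local docker daemon reports, using the same docker invocation as above.
func checkCgroupDriver(kubeletDoc []byte) error {
	var cfg struct {
		CgroupDriver string `yaml:"cgroupDriver"`
	}
	if err := yaml.Unmarshal(kubeletDoc, &cfg); err != nil {
		return err
	}
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return err
	}
	if driver := strings.TrimSpace(string(out)); driver != cfg.CgroupDriver {
		return fmt.Errorf("kubelet wants cgroupDriver %q but docker reports %q", cfg.CgroupDriver, driver)
	}
	return nil
}

func main() {
	// Just the KubeletConfiguration document; kubeadm.yaml bundles several
	// YAML documents, so a real check would split on "---" first.
	doc := []byte("kind: KubeletConfiguration\ncgroupDriver: systemd\n")
	if err := checkCgroupDriver(doc); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}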
	I0531 11:08:29.110458   13098 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220531110241-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 11:08:29.110513   13098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0531 11:08:29.118416   13098 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:08:29.118475   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:08:29.127166   13098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0531 11:08:29.139824   13098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:08:29.152704   13098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0531 11:08:29.167560   13098 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:08:29.171514   13098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
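The /etc/hosts one-liner above (strip any existing control-plane.minikube.internal line, append the fresh mapping, copy the temp file back over /etc/hosts) is the usual idempotent-upsert trick, so reruns never accumulate duplicate entries. The same logic in Go, with upsertHost as our illustrative name (writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line already ending in "\t<name>", appends the fresh
// "ip\tname" mapping, and writes the result back via a temp file (the
// /tmp/h.$$ step in the shell version).
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}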
	I0531 11:08:29.180955   13098 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169 for IP: 192.168.49.2
	I0531 11:08:29.181081   13098 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:08:29.181135   13098 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:08:29.181221   13098 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.key
	I0531 11:08:29.181289   13098 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key.dd3b5fb2
	I0531 11:08:29.181350   13098 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key
	I0531 11:08:29.181563   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:08:29.181602   13098 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:08:29.181614   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:08:29.181650   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:08:29.181679   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:08:29.181715   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:08:29.181774   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:08:29.182294   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:08:29.204256   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 11:08:29.222547   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:08:29.240162   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 11:08:29.257426   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:08:29.274651   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:08:29.291982   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:08:29.310334   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:08:29.327611   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:08:29.345041   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:08:29.361815   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:08:29.379584   13098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
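[editor's note] Each "scp memory -->" line above is minikube streaming a file it generated in memory (systemd units, kubeadm.yaml, the kubeconfig) straight to a path on the node rather than copying a file from disk. A rough local equivalent, assuming passwordless ssh to the node; the helper name and the ssh/sudo-tee transport are my illustration, not minikube's actual implementation:

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    // pushMemoryFile streams in-memory data over ssh into dst on the
    // node, the moral equivalent of the "scp memory --> dst" log lines.
    func pushMemoryFile(node, dst string, data []byte) error {
    	cmd := exec.Command("ssh", node, "sudo tee "+dst+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(data)
    	return cmd.Run()
    }

    func main() {
    	kubeconfig := []byte("apiVersion: v1\nkind: Config\n")
    	if err := pushMemoryFile("docker@minikube", "/var/lib/minikube/kubeconfig", kubeconfig); err != nil {
    		log.Fatal(err)
    	}
    }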
	I0531 11:08:29.392136   13098 ssh_runner.go:195] Run: openssl version
	I0531 11:08:29.397577   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:08:29.405431   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.409147   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.409201   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.414250   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:08:29.421379   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:08:29.429280   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.433082   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.433125   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.438228   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:08:29.445650   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:08:29.453384   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.457538   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.457576   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.462718   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
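[editor's note] The repeated openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL CA-directory convention: certificates under /etc/ssl/certs are looked up by <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem). A sketch reproducing that two-step dance by shelling out to openssl, as the log does (the function name is mine):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCACert symlinks pem into certsDir under the
    // "<openssl subject hash>.0" name that OpenSSL's lookup expects.
    func installCACert(pem, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := certsDir + "/" + hash + ".0"
    	_ = os.Remove(link) // mimic ln -fs: replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("CA hash symlink installed")
    }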
	I0531 11:08:29.469934   13098 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:29.470029   13098 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:08:29.501701   13098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:08:29.509593   13098 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:08:29.509612   13098 kubeadm.go:626] restartCluster start
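[editor's note] The single "sudo ls" over kubeadm-flags.env, config.yaml and the etcd data directory is how minikube decides between a fresh kubeadm init and the restart path entered here: if all three exist, the node was provisioned before. A compact sketch of that probe run directly on the node (the helper name is mine):

    package main

    import (
    	"fmt"
    	"os"
    )

    // wasProvisioned reports whether a previous kubeadm run left its
    // state behind, mirroring the "sudo ls ..." probe in the log.
    func wasProvisioned() bool {
    	paths := []string{
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd",
    	}
    	for _, p := range paths {
    		if _, err := os.Stat(p); err != nil {
    			return false // any missing path means fresh init
    		}
    	}
    	return true
    }

    func main() {
    	if wasProvisioned() {
    		fmt.Println("found existing configuration files, will attempt cluster restart")
    	} else {
    		fmt.Println("no prior state, running full kubeadm init")
    	}
    }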
	I0531 11:08:29.509663   13098 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:08:29.516645   13098 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:29.516701   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:29.586936   13098 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220531110241-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:08:29.587110   13098 kubeconfig.go:127] "old-k8s-version-20220531110241-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:08:29.587479   13098 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:08:29.588780   13098 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:08:29.596409   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:29.596461   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:29.604840   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:29.805295   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:29.805488   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:29.816379   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.005006   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.005125   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.014520   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.204919   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.205019   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.214251   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.406956   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.407122   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.417653   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.604950   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.605035   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.614470   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.805126   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.805274   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.814510   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.006953   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.007113   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.017745   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.206207   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.206350   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.217593   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.404972   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.405096   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.415473   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.606801   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.606929   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.616800   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.805593   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.805718   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.816339   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.005143   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.005270   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.015802   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.204976   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.205118   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.216470   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.406933   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.407072   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.417683   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.606963   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.607083   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.617829   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.617839   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.617884   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.628709   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.628721   13098 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 11:08:32.628730   13098 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:08:32.628795   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:08:32.656434   13098 ssh_runner.go:195] Run: sudo systemctl stop kubelet
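[editor's note] The --filter=name=k8s_.*_(kube-system)_ pattern above works because dockershim names every container k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so the namespace is recoverable from the container name alone. A sketch that lists and stops all kube-system containers the same way, shelling out to docker as the log does:

    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List every container (running or not) whose dockershim name
    	// places it in the kube-system namespace.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_.*_(kube-system)_",
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return // nothing to stop
    	}
    	args := append([]string{"stop"}, ids...)
    	if err := exec.Command("docker", args...).Run(); err != nil {
    		log.Fatal(err)
    	}
    }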
	I0531 11:08:32.666183   13098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:08:32.673748   13098 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 May 31 18:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5779 May 31 18:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5927 May 31 18:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5727 May 31 18:04 /etc/kubernetes/scheduler.conf
	
	I0531 11:08:32.673812   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:08:32.681500   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:08:32.689012   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:08:32.696145   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:08:32.703549   13098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:08:32.711764   13098 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:08:32.711775   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:32.763732   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.029517   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.265778182s)
	I0531 11:08:34.029540   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.237331   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.291890   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
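[editor's note] Rather than a full kubeadm init, the restart path replays only the individual phases logged above: certs, kubeconfig, kubelet-start, control-plane, etcd, in that order, each against the same kubeadm.yaml and the version-pinned binaries directory. A sketch of that driver loop (the loop itself is my reconstruction of this sequence):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	binDir := "/var/lib/minikube/binaries/v1.16.0"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		// Matches the logged invocations:
    		//   sudo env PATH=<binDir>:$PATH kubeadm init phase <phase> --config <cfg>
    		cmd := "sudo env PATH=\"" + binDir + ":$PATH\" kubeadm init phase " + phase + " --config " + cfg
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			log.Fatalf("phase %q failed: %v", phase, err)
    		}
    	}
    }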
	I0531 11:08:34.348275   13098 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:08:34.348330   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:34.859154   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:35.357249   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:35.859107   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:36.357652   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:36.859123   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:37.359109   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:37.859168   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:38.359070   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:38.857449   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:39.357079   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:39.858143   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:40.359003   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:40.859087   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:41.359036   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:41.859047   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:42.357140   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:42.857133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:43.357195   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:43.859044   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:44.357080   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:44.858240   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:45.357042   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:45.857108   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:46.357039   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:46.858005   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:47.357517   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:47.856962   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:48.358073   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:48.857317   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:49.356887   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:49.858909   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:50.358934   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:50.856994   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:51.358931   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:51.858801   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:52.356935   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:52.857770   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:53.357133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:53.858875   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:54.357428   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:54.856995   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:55.357549   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:55.858840   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:56.356750   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:56.858862   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:57.356865   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:57.858837   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:58.358311   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:58.858798   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:59.358828   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:59.858881   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:00.358750   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:00.858856   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:01.357557   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:01.858847   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:02.356665   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:02.858662   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:03.358757   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:03.857406   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:04.358099   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:04.856720   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:05.358724   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:05.858746   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:06.357258   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:06.856763   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:07.357893   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:07.858727   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:08.356831   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:08.857000   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:09.358665   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:09.858133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:10.357032   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:10.857957   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:11.356952   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:11.858660   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:12.357622   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:12.858640   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:13.356693   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:13.858667   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:14.357353   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:14.858510   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:15.358636   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:15.856620   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:16.357157   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:16.857097   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:17.356528   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:17.856738   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:18.356746   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:18.856987   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:19.358618   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:19.858357   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:20.357432   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:20.858551   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:21.358576   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:21.857145   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:22.357177   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:22.858306   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:23.356771   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:23.857014   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:24.357042   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:24.856754   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:25.358119   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:25.857621   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:26.358516   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:26.857398   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:27.358455   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:27.858462   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:28.357167   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:28.856990   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:29.357801   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:29.857261   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:30.357437   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:30.857515   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:31.358160   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:31.858408   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:32.358413   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:32.857663   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:33.357837   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:33.857102   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
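[editor's note] The ~500ms cadence of the pgrep lines above is a plain poll-until-deadline loop waiting for a kube-apiserver process to appear; here it runs out the full minute without a hit and falls through to log gathering below. A minimal sketch of that wait (pgrep arguments copied from the log; the timeout value is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process
    // shows up or the deadline passes, like the loop in the log.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return nil // process found
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }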
	I0531 11:09:34.357463   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:34.388265   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.388277   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:34.388334   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:34.417576   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.417588   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:34.417644   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:34.446353   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.446366   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:34.446422   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:34.475446   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.475461   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:34.475516   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:34.505125   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.505137   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:34.505192   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:34.533497   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.533509   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:34.533572   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:34.562509   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.562526   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:34.562590   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:34.591764   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.591780   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:34.591788   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:34.591795   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:34.630492   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:34.630506   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:34.642193   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:34.642206   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:34.696106   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:34.696117   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:34.696124   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:34.708414   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:34.708426   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:36.762711   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054297817s)
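[editor's note] The container-status probe above is deliberately runtime-agnostic: `which crictl || echo crictl` makes the first half of the || chain fail cleanly when crictl is absent, so the docker fallback runs instead. The same fallback expressed directly in Go (the helper name is mine):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl and falls back to docker, like
    // the `sudo crictl ps -a || sudo docker ps -a` probe in the log.
    func containerStatus() ([]byte, error) {
    	if _, err := exec.LookPath("crictl"); err == nil {
    		return exec.Command("sudo", "crictl", "ps", "-a").Output()
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }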
	I0531 11:09:39.263497   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:39.356952   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:39.386768   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.386781   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:39.386842   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:39.417308   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.417321   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:39.417377   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:39.447193   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.447206   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:39.447273   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:39.476858   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.476871   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:39.476925   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:39.505331   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.505343   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:39.505393   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:39.534339   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.534350   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:39.534411   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:39.564150   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.564163   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:39.564226   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:39.593779   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.593792   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:39.593799   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:39.593807   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:39.605961   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:39.605980   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:39.660198   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:39.660212   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:39.660221   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:39.673023   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:39.673035   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:41.727761   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054739505s)
	I0531 11:09:41.727870   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:41.727877   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:44.270600   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:44.357538   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:44.387750   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.387765   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:44.387828   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:44.417243   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.417256   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:44.417316   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:44.446079   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.446093   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:44.446149   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:44.475402   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.475414   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:44.475474   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:44.504617   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.504631   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:44.504699   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:44.534026   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.534043   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:44.534107   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:44.563392   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.563406   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:44.563466   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:44.591445   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.591457   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:44.591464   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:44.591470   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:44.631333   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:44.631348   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:44.643173   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:44.643186   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:44.696709   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:44.696722   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:44.696730   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:44.709853   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:44.709866   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:46.763128   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053274008s)
	I0531 11:09:49.263476   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:49.356184   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:49.386522   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.386534   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:49.386587   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:49.415937   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.415954   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:49.416011   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:49.444575   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.444586   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:49.444640   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:49.473589   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.473602   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:49.473660   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:49.501607   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.501620   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:49.501680   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:49.530816   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.530829   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:49.530905   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:49.561098   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.561110   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:49.561164   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:49.590698   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.590715   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:49.590723   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:49.590730   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:49.629663   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:49.629677   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:49.641508   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:49.641539   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:49.696749   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:49.696760   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:49.696771   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:49.709171   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:49.709184   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:51.764551   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055377989s)
	I0531 11:09:54.264918   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:54.356039   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:54.388392   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.388407   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:54.388479   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:54.421365   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.421378   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:54.421433   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:54.455045   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.455057   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:54.455119   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:54.489207   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.489220   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:54.489279   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:54.521630   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.521643   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:54.521702   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:54.551997   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.552012   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:54.552089   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:54.585330   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.585343   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:54.585405   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:54.618689   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.618707   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:54.618719   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:54.618731   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:56.676166   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057446673s)
	I0531 11:09:56.676301   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:56.676310   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:56.717480   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:56.717496   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:56.731748   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:56.731762   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:56.784506   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:56.784518   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:56.784525   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:59.299028   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:59.356233   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:59.387581   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.387594   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:59.387648   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:59.416950   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.416965   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:59.417026   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:59.445994   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.446006   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:59.446066   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:59.474706   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.474719   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:59.474774   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:59.503641   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.503653   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:59.503706   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:59.532168   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.532183   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:59.532238   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:59.561842   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.561855   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:59.561916   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:59.590504   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.590516   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:59.590522   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:59.590529   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:59.629633   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:59.629647   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:59.641945   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:59.641959   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:59.696474   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:59.696490   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:59.696496   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:59.709878   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:59.709892   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:01.764080   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054200644s)
	I0531 11:10:04.265110   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:04.356351   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:04.389088   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.389101   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:04.389161   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:04.418896   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.418909   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:04.418978   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:04.447037   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.447050   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:04.447113   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:04.476510   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.476525   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:04.476584   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:04.504763   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.504776   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:04.504830   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:04.533804   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.533816   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:04.533874   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:04.563500   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.563513   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:04.563570   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:04.592999   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.593012   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:04.593019   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:04.593025   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:04.631360   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:04.631374   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:04.643433   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:04.643448   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:04.696754   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:04.696772   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:04.696779   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:04.708788   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:04.708799   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:06.764822   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056035115s)
	I0531 11:10:09.266997   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:09.356193   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:09.388153   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.388167   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:09.388231   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:09.417585   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.417597   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:09.417653   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:09.449878   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.449891   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:09.449954   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:09.479850   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.479864   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:09.479927   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:09.509485   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.509498   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:09.509561   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:09.540190   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.540204   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:09.540259   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:09.569247   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.569259   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:09.569318   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:09.598109   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.598122   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:09.598129   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:09.598136   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:09.638429   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:09.638443   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:09.650114   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:09.650127   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:09.701838   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:09.701849   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:09.701856   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:09.714324   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:09.714337   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:11.769141   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054814108s)
	I0531 11:10:14.271474   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:14.357946   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:14.388903   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.388915   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:14.388971   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:14.417777   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.417789   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:14.417858   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:14.445824   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.445838   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:14.445899   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:14.475251   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.475263   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:14.475321   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:14.503865   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.503878   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:14.503932   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:14.533523   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.533536   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:14.533594   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:14.562861   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.562874   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:14.562926   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:14.593313   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.593326   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:14.593333   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:14.593340   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:14.647510   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:14.647524   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:14.647531   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:14.659937   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:14.659953   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:16.716744   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056804235s)
	I0531 11:10:16.716857   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:16.716863   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:16.754919   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:16.754931   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
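The dmesg step in each pass filters the kernel ring buffer down to warnings and worse; since the command recurs verbatim, its flags are worth decoding once:

    # -P: no pager, -H: human-readable timestamps, -L=never: no color codes,
    # --level warn,err,crit,alert,emerg: keep only those priorities;
    # tail caps the capture at the most recent 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400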
	I0531 11:10:19.267035   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:19.357894   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:19.391000   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.391013   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:19.391069   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:19.419657   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.419668   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:19.419722   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:19.449464   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.449476   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:19.449530   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:19.479823   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.479837   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:19.479896   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:19.509429   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.509443   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:19.509523   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:19.538786   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.538798   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:19.538853   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:19.568183   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.568199   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:19.568256   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:19.598298   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.598311   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:19.598318   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:19.598325   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:19.610062   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:19.610073   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:19.661888   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:19.661899   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:19.661905   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:19.673854   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:19.673866   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:21.733389   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059536851s)
	I0531 11:10:21.733494   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:21.733501   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
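Every "describe nodes" attempt in these passes fails identically: the kubectl at /var/lib/minikube/binaries/v1.16.0/kubectl reads /var/lib/minikube/kubeconfig, which points at localhost:8443, and the connection is refused because nothing is listening there, consistent with the empty k8s_kube-apiserver probe a few lines earlier. A direct port check shows the same condition (a sketch, run inside the node; -k skips TLS verification, and /healthz is the health endpoint served by apiservers of this vintage):

    # "connection refused" here corresponds to the kubectl error above.
    curl -ks https://localhost:8443/healthz || echo "apiserver not reachable on 8443"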
	I0531 11:10:24.275115   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:24.356493   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:24.386276   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.386290   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:24.386350   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:24.416711   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.416723   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:24.416776   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:24.448608   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.448620   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:24.448673   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:24.478070   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.478085   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:24.478143   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:24.507952   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.507964   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:24.508019   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:24.536910   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.536923   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:24.536976   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:24.565298   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.565309   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:24.565363   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:24.594397   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.594408   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:24.594415   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:24.594421   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:24.646558   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:24.646575   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:24.646582   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:24.658715   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:24.658729   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:26.714683   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055966036s)
	I0531 11:10:26.714790   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:26.714797   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:26.754170   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:26.754183   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:29.268130   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:29.355669   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:29.386195   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.386207   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:29.386267   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:29.415255   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.415269   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:29.415327   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:29.445521   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.445533   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:29.445590   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:29.474576   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.474590   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:29.474648   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:29.503269   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.503283   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:29.503340   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:29.531750   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.531763   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:29.531818   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:29.560522   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.560534   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:29.560588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:29.589986   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.589997   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:29.590004   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:29.590012   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:31.642158   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052159996s)
	I0531 11:10:31.642264   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:31.642271   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:31.680540   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:31.680560   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:31.693978   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:31.693995   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:31.750664   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:31.750676   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:31.750683   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:34.264743   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:34.355629   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:34.389804   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.389817   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:34.389879   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:34.421065   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.421078   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:34.421133   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:34.450506   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.450525   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:34.450588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:34.480274   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.480286   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:34.480339   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:34.509810   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.509825   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:34.509885   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:34.547728   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.547741   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:34.547797   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:34.577758   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.577770   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:34.577824   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:34.607647   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.607660   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:34.607666   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:34.607673   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:34.646813   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:34.646827   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:34.659116   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:34.659131   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:34.711878   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:34.711895   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:34.711902   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:34.723823   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:34.723835   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:36.778445   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054619098s)
	I0531 11:10:39.278826   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:39.355843   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:39.386690   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.386705   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:39.386759   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:39.415159   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.415171   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:39.415229   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:39.451994   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.452007   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:39.452062   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:39.480982   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.480996   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:39.481053   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:39.509323   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.509336   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:39.509390   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:39.537420   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.537432   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:39.537489   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:39.565876   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.565889   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:39.565942   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:39.596336   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.596347   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:39.596354   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:39.596361   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:39.653266   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:39.653276   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:39.653284   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:39.665996   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:39.666008   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:41.723703   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057708131s)
	I0531 11:10:41.723821   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:41.723829   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:41.762214   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:41.762228   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
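The "container status" command that closes most passes is itself a fallback chain; the command below is verbatim from the log, with the interpretation added as comments:

    # `which crictl || echo crictl` expands to crictl's full path if installed,
    # otherwise to the bare word "crictl"; when that ps -a invocation fails,
    # the || branch falls through to plain "docker ps -a", which is what
    # actually answers on this Docker-runtime node.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a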
	I0531 11:10:44.274433   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:44.355956   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:44.388561   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.388573   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:44.388631   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:44.418528   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.418540   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:44.418596   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:44.448209   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.448228   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:44.448287   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:44.476717   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.476731   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:44.476794   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:44.506060   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.506073   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:44.506127   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:44.535489   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.535502   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:44.535556   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:44.566115   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.566126   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:44.566195   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:44.595347   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.595359   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:44.595366   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:44.595373   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:44.635087   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:44.635104   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:44.648064   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:44.648084   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:44.702705   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:44.702715   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:44.702725   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:44.715262   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:44.715275   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:46.769384   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05412268s)
	I0531 11:10:49.269850   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:49.356095   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:49.389116   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.389130   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:49.389189   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:49.418954   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.418966   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:49.419021   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:49.448672   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.448684   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:49.448748   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:49.477673   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.477685   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:49.477741   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:49.506658   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.506673   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:49.506736   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:49.535844   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.535856   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:49.535912   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:49.564691   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.564704   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:49.564757   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:49.594090   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.594102   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:49.594109   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:49.594116   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:49.634714   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:49.634727   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:49.646653   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:49.646666   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:49.699411   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:49.699421   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:49.699428   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:49.712418   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:49.712430   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:51.767720   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055302033s)
	I0531 11:10:54.268584   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:54.355312   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:54.402717   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.402746   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:54.402853   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:54.471995   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.472008   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:54.472076   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:54.519373   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.519388   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:54.519452   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:54.561548   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.561561   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:54.561618   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:54.591345   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.591357   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:54.591412   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:54.640864   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.640879   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:54.640945   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:54.671790   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.671803   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:54.671857   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:54.706884   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.706895   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:54.706903   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:54.706911   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:56.760836   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053934934s)
	I0531 11:10:56.760940   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:56.760946   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:56.799437   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:56.799452   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:56.813095   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:56.813109   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:56.865931   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:56.865942   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:56.865949   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
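The k8s_ prefix in every docker ps filter comes from dockershim's container naming scheme, k8s_<container>_<pod>_<namespace>_<pod-uid>_<attempt>, so each probe lists only kubelet-managed containers for one control-plane component; the uniformly empty results mean the kubelet never got any of the static pods running. The whole per-component sweep condenses to a short loop (a sketch using the same filters as the log):

    # Print any kubelet-managed container IDs per component; all empty here.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
      echo "== $c: $(docker ps -aq --filter "name=k8s_${c}")"
    done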
	I0531 11:10:59.378503   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:59.856449   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:59.886711   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.886723   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:59.886777   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:59.917269   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.917283   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:59.917349   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:59.953208   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.953222   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:59.953295   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:59.985163   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.985175   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:59.985230   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:00.019546   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.019559   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:00.019619   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:00.048681   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.048694   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:00.048750   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:00.080858   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.080875   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:00.080942   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:00.116240   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.116252   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:00.116258   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:00.116267   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:00.129973   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:00.129986   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:00.191716   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:00.191728   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:00.191748   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:00.207100   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:00.207112   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:02.269342   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062241719s)
	I0531 11:11:02.269451   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:02.269458   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:04.814644   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:04.855355   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:04.899388   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.899403   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:04.899460   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:04.931294   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.931308   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:04.931372   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:04.966850   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.966868   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:04.966930   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:05.006753   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.006766   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:05.006825   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:05.035514   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.035528   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:05.035581   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:05.071606   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.071618   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:05.071679   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:05.113543   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.113558   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:05.113622   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:05.158389   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.158403   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:05.158412   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:05.158420   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:05.209536   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:05.209555   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:05.226226   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:05.226244   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:05.293642   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:05.293653   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:05.293661   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:05.314581   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:05.314597   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:07.372712   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058122008s)
	I0531 11:11:09.873521   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:10.356773   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:10.386073   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.386085   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:10.386139   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:10.415320   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.415332   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:10.415399   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:10.444338   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.444352   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:10.444410   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:10.472812   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.472823   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:10.472880   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:10.500902   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.500914   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:10.500971   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:10.530609   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.530621   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:10.530672   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:10.561973   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.561987   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:10.562047   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:10.591600   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.591611   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:10.591618   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:10.591625   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:10.648762   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:10.648773   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:10.648779   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:10.660930   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:10.660942   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:12.715163   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054233595s)
	I0531 11:11:12.715268   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:12.715274   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:12.757025   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:12.757041   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:15.269700   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:15.355475   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:15.385163   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.385180   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:15.385236   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:15.417139   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.417153   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:15.417210   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:15.447785   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.447798   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:15.447864   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:15.476820   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.476832   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:15.476893   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:15.506443   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.506459   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:15.506517   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:15.535403   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.535422   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:15.535490   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:15.563398   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.563411   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:15.563468   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:15.592213   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.592225   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:15.592238   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:15.592245   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:15.631327   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:15.631342   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:15.642726   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:15.642740   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:15.694280   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:15.694292   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:15.694300   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:15.706180   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:15.706192   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:17.759941   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053759496s)
	I0531 11:11:20.260199   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:20.357081   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:20.391524   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.391536   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:20.391588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:20.420970   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.420982   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:20.421037   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:20.452134   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.452148   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:20.452206   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:20.483165   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.483176   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:20.483217   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:20.512821   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.512834   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:20.512892   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:20.543804   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.543816   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:20.543877   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:20.575838   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.575850   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:20.575908   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:20.607187   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.607200   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:20.607206   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:20.607214   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:20.620268   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:20.620287   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:20.683805   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:20.683818   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:20.683825   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:20.696565   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:20.696583   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:22.757052   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060481864s)
	I0531 11:11:22.757167   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:22.757175   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:25.296888   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:25.356633   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:25.388153   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.388166   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:25.388229   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:25.417984   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.417997   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:25.418052   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:25.447364   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.447376   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:25.447432   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:25.475704   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.475718   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:25.475772   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:25.504817   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.504830   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:25.504882   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:25.534188   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.534200   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:25.534255   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:25.562856   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.562868   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:25.562922   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:25.592490   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.592503   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:25.592509   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:25.592517   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:25.604749   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:25.604762   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:25.657748   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:25.657758   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:25.657765   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:25.669778   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:25.669790   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:27.727458   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057680964s)
	I0531 11:11:27.727570   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:27.727577   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:30.268792   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:30.355702   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:30.385351   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.385362   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:30.385416   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:30.416692   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.416704   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:30.416756   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:30.446080   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.446092   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:30.446148   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:30.475837   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.475850   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:30.475904   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:30.505855   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.505866   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:30.505919   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:30.534660   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.534673   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:30.534735   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:30.563972   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.563985   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:30.564039   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:30.593062   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.593075   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:30.593082   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:30.593089   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:30.604860   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:30.604873   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:30.657067   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:30.657079   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:30.657087   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:30.669385   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:30.669397   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:32.725632   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056248231s)
	I0531 11:11:32.725738   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:32.725745   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:35.265482   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:35.356955   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:35.388680   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.388693   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:35.388746   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:35.418234   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.418247   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:35.418306   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:35.448424   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.448436   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:35.448488   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:35.477114   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.477126   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:35.477183   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:35.507149   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.507160   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:35.507222   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:35.536636   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.536648   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:35.536706   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:35.566077   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.566089   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:35.566147   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:35.596667   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.596680   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:35.596686   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:35.596693   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:37.649220   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052538655s)
	I0531 11:11:37.649329   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:37.649337   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:37.690050   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:37.690063   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:37.701532   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:37.701545   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:37.754370   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:37.754382   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:37.754389   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:40.266957   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:40.356874   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:40.387551   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.387563   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:40.387617   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:40.416687   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.416699   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:40.416751   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:40.446274   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.446288   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:40.446341   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:40.477123   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.477138   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:40.477196   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:40.507689   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.507702   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:40.507752   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:40.538333   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.538346   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:40.538398   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:40.568456   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.568468   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:40.568524   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:40.598870   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.598883   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:40.598891   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:40.598898   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:40.637605   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:40.637623   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:40.650027   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:40.650045   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:40.702714   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:40.702727   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:40.702734   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:40.715145   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:40.715158   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:42.769567   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054421687s)
	I0531 11:11:45.271767   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:45.354742   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:45.384335   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.384348   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:45.384402   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:45.415481   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.415493   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:45.415567   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:45.444878   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.444892   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:45.444964   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:45.474544   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.474557   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:45.474616   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:45.504114   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.504126   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:45.504184   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:45.532825   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.532838   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:45.532893   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:45.561687   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.561699   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:45.561752   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:45.592123   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.592136   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:45.592143   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:45.592149   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:45.631894   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:45.631908   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:45.643759   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:45.643771   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:45.743249   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:45.743266   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:45.743273   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:45.755246   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:45.755258   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:47.813698   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058453882s)
	I0531 11:11:50.316034   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:50.355463   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:50.385123   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.385136   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:50.385190   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:50.414943   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.414957   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:50.415012   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:50.443429   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.443441   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:50.443498   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:50.472680   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.472693   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:50.472747   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:50.501429   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.501443   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:50.501501   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:50.531478   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.531489   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:50.531545   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:50.563245   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.563259   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:50.563317   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:50.593840   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.593852   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:50.593858   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:50.593865   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:50.661648   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:50.661658   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:50.661667   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:50.673634   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:50.673646   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:52.731947   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058312875s)
	I0531 11:11:52.732053   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:52.732060   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:52.771014   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:52.771030   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:55.283215   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:55.354561   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:55.386892   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.386904   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:55.386963   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:55.417758   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.417772   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:55.417829   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:55.448756   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.448769   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:55.448826   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:55.483671   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.483685   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:55.483744   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:55.514477   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.514487   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:55.514555   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:55.544537   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.544548   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:55.544607   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:55.573745   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.573759   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:55.573817   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:55.606613   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.606628   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:55.606637   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:55.606644   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:57.665311   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058678422s)
	I0531 11:11:57.665420   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:57.665427   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:57.704653   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:57.704668   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:57.718596   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:57.718610   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:57.773458   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:57.773476   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:57.773490   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:12:00.286492   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:12:00.354651   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:12:00.392079   13098 logs.go:274] 0 containers: []
	W0531 11:12:00.392091   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:12:00.392150   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:12:00.423305   13098 logs.go:274] 0 containers: []
	W0531 11:12:00.423318   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:12:00.423376   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:12:00.453590   13098 logs.go:274] 0 containers: []
	W0531 11:12:00.453602   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:12:00.453664   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:12:00.484435   13098 logs.go:274] 0 containers: []
	W0531 11:12:00.484448   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:12:00.484500   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:12:00.516556   13098 logs.go:274] 0 containers: []
	W0531 11:12:00.516567   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:12:00.516608   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:12:00.545969   13098 logs.go:274] 0 containers: []
	W0531 11:12:00.545981   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:12:00.546032   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:12:00.574540   13098 logs.go:274] 0 containers: []
	W0531 11:12:00.574554   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:12:00.574612   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:12:00.604665   13098 logs.go:274] 0 containers: []
	W0531 11:12:00.604676   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:12:00.604682   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:12:00.604689   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:12:00.616996   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:12:00.617012   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:12:02.671151   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054152001s)
	I0531 11:12:02.671262   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:12:02.671269   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:12:02.710349   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:12:02.710362   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:12:02.722447   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:12:02.722463   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:12:02.775933   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:12:05.276053   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:12:05.354602   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:12:05.412997   13098 logs.go:274] 0 containers: []
	W0531 11:12:05.413010   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:12:05.413065   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:12:05.446411   13098 logs.go:274] 0 containers: []
	W0531 11:12:05.446427   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:12:05.446531   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:12:05.490771   13098 logs.go:274] 0 containers: []
	W0531 11:12:05.490783   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:12:05.490838   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:12:05.522431   13098 logs.go:274] 0 containers: []
	W0531 11:12:05.522443   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:12:05.522499   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:12:05.560546   13098 logs.go:274] 0 containers: []
	W0531 11:12:05.560562   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:12:05.560626   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:12:05.604168   13098 logs.go:274] 0 containers: []
	W0531 11:12:05.604180   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:12:05.604237   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:12:05.636323   13098 logs.go:274] 0 containers: []
	W0531 11:12:05.636337   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:12:05.636391   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:12:05.682018   13098 logs.go:274] 0 containers: []
	W0531 11:12:05.682041   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:12:05.682052   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:12:05.682084   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:12:05.724061   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:12:05.724074   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:12:05.737161   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:12:05.737174   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:12:05.792193   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:12:05.792210   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:12:05.792216   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:12:05.811898   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:12:05.811912   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:12:07.872307   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06040274s)
	I0531 11:12:10.372638   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:12:10.854580   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:12:10.902029   13098 logs.go:274] 0 containers: []
	W0531 11:12:10.902044   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:12:10.902115   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:12:10.936815   13098 logs.go:274] 0 containers: []
	W0531 11:12:10.936834   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:12:10.936895   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:12:10.972402   13098 logs.go:274] 0 containers: []
	W0531 11:12:10.972441   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:12:10.972499   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:12:11.009665   13098 logs.go:274] 0 containers: []
	W0531 11:12:11.009684   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:12:11.009800   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:12:11.043070   13098 logs.go:274] 0 containers: []
	W0531 11:12:11.043087   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:12:11.043148   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:12:11.076389   13098 logs.go:274] 0 containers: []
	W0531 11:12:11.076403   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:12:11.076468   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:12:11.108551   13098 logs.go:274] 0 containers: []
	W0531 11:12:11.108564   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:12:11.108619   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:12:11.142357   13098 logs.go:274] 0 containers: []
	W0531 11:12:11.142372   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:12:11.142380   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:12:11.142388   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:12:11.212207   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:12:11.212225   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:12:11.212232   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:12:11.225426   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:12:11.225446   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:12:13.294542   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.069107388s)
	I0531 11:12:13.294690   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:12:13.294714   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:12:13.337580   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:12:13.337597   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:12:15.853736   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:12:16.355362   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:12:16.385793   13098 logs.go:274] 0 containers: []
	W0531 11:12:16.385805   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:12:16.385862   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:12:16.415536   13098 logs.go:274] 0 containers: []
	W0531 11:12:16.415550   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:12:16.415609   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:12:16.446042   13098 logs.go:274] 0 containers: []
	W0531 11:12:16.446056   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:12:16.446118   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:12:16.476942   13098 logs.go:274] 0 containers: []
	W0531 11:12:16.476955   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:12:16.477014   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:12:16.506699   13098 logs.go:274] 0 containers: []
	W0531 11:12:16.506712   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:12:16.506775   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:12:16.538212   13098 logs.go:274] 0 containers: []
	W0531 11:12:16.538223   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:12:16.538274   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:12:16.568607   13098 logs.go:274] 0 containers: []
	W0531 11:12:16.568619   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:12:16.568675   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:12:16.598480   13098 logs.go:274] 0 containers: []
	W0531 11:12:16.598491   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:12:16.598498   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:12:16.598518   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:12:16.651650   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:12:16.651663   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:12:16.651676   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:12:16.664503   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:12:16.664516   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:12:18.718730   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054227228s)
	I0531 11:12:18.718841   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:12:18.718848   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:12:18.759450   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:12:18.759472   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:12:21.272383   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:12:21.354735   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:12:21.387913   13098 logs.go:274] 0 containers: []
	W0531 11:12:21.387926   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:12:21.387983   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:12:21.416669   13098 logs.go:274] 0 containers: []
	W0531 11:12:21.416682   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:12:21.416738   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:12:21.446613   13098 logs.go:274] 0 containers: []
	W0531 11:12:21.446625   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:12:21.446681   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:12:21.476509   13098 logs.go:274] 0 containers: []
	W0531 11:12:21.476522   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:12:21.476583   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:12:21.506713   13098 logs.go:274] 0 containers: []
	W0531 11:12:21.506725   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:12:21.506779   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:12:21.537476   13098 logs.go:274] 0 containers: []
	W0531 11:12:21.537490   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:12:21.537548   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:12:21.569560   13098 logs.go:274] 0 containers: []
	W0531 11:12:21.569572   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:12:21.569628   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:12:21.600970   13098 logs.go:274] 0 containers: []
	W0531 11:12:21.600983   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:12:21.600991   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:12:21.600998   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:12:21.614091   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:12:21.614104   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:12:21.671396   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:12:21.671408   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:12:21.671415   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:12:21.683868   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:12:21.683880   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:12:23.738761   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054894427s)
	I0531 11:12:23.738904   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:12:23.738910   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:12:26.281017   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:12:26.354525   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:12:26.396000   13098 logs.go:274] 0 containers: []
	W0531 11:12:26.396015   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:12:26.396073   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:12:26.424559   13098 logs.go:274] 0 containers: []
	W0531 11:12:26.424572   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:12:26.424627   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:12:26.470265   13098 logs.go:274] 0 containers: []
	W0531 11:12:26.470283   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:12:26.470349   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:12:26.503437   13098 logs.go:274] 0 containers: []
	W0531 11:12:26.503449   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:12:26.503503   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:12:26.542812   13098 logs.go:274] 0 containers: []
	W0531 11:12:26.542829   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:12:26.542892   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:12:26.579982   13098 logs.go:274] 0 containers: []
	W0531 11:12:26.580010   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:12:26.580082   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:12:26.612591   13098 logs.go:274] 0 containers: []
	W0531 11:12:26.612604   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:12:26.612659   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:12:26.658907   13098 logs.go:274] 0 containers: []
	W0531 11:12:26.658919   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:12:26.658928   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:12:26.658936   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:12:26.671905   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:12:26.671928   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:12:26.725866   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:12:26.725880   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:12:26.725887   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:12:26.738833   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:12:26.738847   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:12:28.804360   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065523728s)
	I0531 11:12:28.804479   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:12:28.804486   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:12:31.355962   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:12:31.855862   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:12:31.896434   13098 logs.go:274] 0 containers: []
	W0531 11:12:31.896450   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:12:31.896516   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:12:31.926007   13098 logs.go:274] 0 containers: []
	W0531 11:12:31.926019   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:12:31.926077   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:12:31.956559   13098 logs.go:274] 0 containers: []
	W0531 11:12:31.956573   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:12:31.956648   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:12:31.987794   13098 logs.go:274] 0 containers: []
	W0531 11:12:31.987808   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:12:31.987869   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:12:32.020553   13098 logs.go:274] 0 containers: []
	W0531 11:12:32.020566   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:12:32.020622   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:12:32.050058   13098 logs.go:274] 0 containers: []
	W0531 11:12:32.050069   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:12:32.050123   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:12:32.081347   13098 logs.go:274] 0 containers: []
	W0531 11:12:32.081362   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:12:32.081417   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:12:32.110484   13098 logs.go:274] 0 containers: []
	W0531 11:12:32.110495   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:12:32.110502   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:12:32.110510   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:12:32.148911   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:12:32.148924   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:12:32.160912   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:12:32.160925   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:12:32.214397   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:12:32.214407   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:12:32.214413   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:12:32.226637   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:12:32.226649   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:12:34.280787   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054151883s)
	I0531 11:12:36.781160   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:12:36.790463   13098 kubeadm.go:630] restartCluster took 4m7.283846799s
	W0531 11:12:36.790547   13098 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0531 11:12:36.790561   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:12:37.211648   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:12:37.221938   13098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:12:37.229593   13098 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:12:37.229642   13098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:12:37.237029   13098 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:12:37.237054   13098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:12:37.961839   13098 out.go:204]   - Generating certificates and keys ...
	I0531 11:12:38.847011   13098 out.go:204]   - Booting up control plane ...
	W0531 11:14:33.760041   13098 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0531 11:14:33.760073   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:14:34.182940   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:14:34.192616   13098 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:14:34.192666   13098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:14:34.200294   13098 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:14:34.200312   13098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:14:34.901603   13098 out.go:204]   - Generating certificates and keys ...
	I0531 11:14:36.104005   13098 out.go:204]   - Booting up control plane ...
	I0531 11:16:31.020061   13098 kubeadm.go:397] StartCluster complete in 8m1.555975545s
	I0531 11:16:31.020140   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:16:31.050974   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.050987   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:16:31.051042   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:16:31.080367   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.080379   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:16:31.080436   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:16:31.109454   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.109467   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:16:31.109523   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:16:31.138029   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.138040   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:16:31.138093   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:16:31.168696   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.168708   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:16:31.168763   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:16:31.198083   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.198100   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:16:31.198162   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:16:31.226599   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.226611   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:16:31.226669   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:16:31.256444   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.256457   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:16:31.256464   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:16:31.256471   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:16:31.295837   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:16:31.295851   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:16:31.307624   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:16:31.307639   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:16:31.359917   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:16:31.359927   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:16:31.359936   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:16:31.372199   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:16:31.372211   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:16:33.427067   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054868747s)
	W0531 11:16:33.427193   13098 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0531 11:16:33.427208   13098 out.go:239] * 
	W0531 11:16:33.427350   13098 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 11:16:33.427367   13098 out.go:239] * 
	W0531 11:16:33.427900   13098 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 11:16:33.489529   13098 out.go:177] 
	W0531 11:16:33.531716   13098 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 11:16:33.531846   13098 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 11:16:33.531898   13098 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0531 11:16:33.573528   13098 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220531110241-2169 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
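The suggestion printed twice in the stderr above is directly actionable: re-running the failed start with the extra kubelet flag appended would look like the line below (binary, profile name, and flags are taken from the failing command above; whether the cgroup driver is actually the culprit here is not established by this log):

	out/minikube-darwin-amd64 start -p old-k8s-version-20220531110241-2169 --memory=2200 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd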
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531110241-2169
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531110241-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815",
	        "Created": "2022-05-31T18:02:47.387078025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:08:26.190082098Z",
	            "FinishedAt": "2022-05-31T18:08:23.336567271Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hostname",
	        "HostsPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hosts",
	        "LogPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815-json.log",
	        "Name": "/old-k8s-version-20220531110241-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531110241-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531110241-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531110241-2169",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531110241-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531110241-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "49bd121b76d28de5c01cec5b2b9b781e9e3115310e778c754e0a43752d617ff2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51934"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51937"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/49bd121b76d2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531110241-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "df301a213db6",
	                        "old-k8s-version-20220531110241-2169"
	                    ],
	                    "NetworkID": "371f88932f2f86b1e4c7d7ee4813eb521c132449a1b646e6adc62c4e1df95fe6",
	                    "EndpointID": "4a1e8f65e10d901150ca70abb003401b842c1eb5fb0be5bb24a9c98ec896642f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
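Everything the post-mortem needs from that dump (is the container running, and which host port fronts the guest's sshd) can be pulled out of the docker inspect JSON with a couple of narrow struct types. A sketch, with the container name taken from the log and the structs covering only the fields read:

	// Extract State.Status and the 22/tcp host port from 'docker inspect',
	// i.e. the same fields minikube queries with --format templates such as
	// '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type container struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-20220531110241-2169").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		if len(cs) == 0 {
			panic("no such container")
		}
		fmt.Println("state:", cs[0].State.Status) // "running" in the dump above
		if ssh := cs[0].NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
			fmt.Println("ssh:", ssh[0].HostIp+":"+ssh[0].HostPort) // 127.0.0.1:51933 above
		}
	}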
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
E0531 11:16:33.970556    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 2 (437.69722ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 logs -n 25: (3.633296212s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p false-20220531104926-2169                      | false-20220531104926-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	| start   | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	| start   | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:03 PDT |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	| start   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --memory=2200                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                        |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220531110241-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220531110241-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                        |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --memory=2200                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                        |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                        |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                        |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                        |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | logs -n 25                                        |                                        |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | logs -n 25                                        |                                        |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169        | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                        |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                        |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220531111208-2169        | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                        |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220531111208-2169        | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220531111208-2169        | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                        |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:13:10
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
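Lines in the format documented above split cleanly with one regular expression; a sketch (the pattern is derived from the format string, not taken from minikube):

	// Parse a klog-style line: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

	func main() {
		line := "I0531 11:13:10.912075   13553 out.go:296] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			// m[1]=severity m[2]=mmdd m[3]=time m[4]=threadid m[5]=file:line m[6]=msg
			fmt.Printf("severity=%s src=%s msg=%q\n", m[1], m[5], m[6])
		}
	}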
	I0531 11:13:10.912075   13553 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:13:10.912340   13553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:13:10.912345   13553 out.go:309] Setting ErrFile to fd 2...
	I0531 11:13:10.912349   13553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:13:10.912452   13553 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:13:10.912710   13553 out.go:303] Setting JSON to false
	I0531 11:13:10.927550   13553 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4359,"bootTime":1654016431,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:13:10.927657   13553 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:13:10.950011   13553 out.go:177] * [embed-certs-20220531111208-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:13:10.992542   13553 notify.go:193] Checking for updates...
	I0531 11:13:11.014435   13553 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:13:11.057209   13553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:13:11.078751   13553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:13:11.100576   13553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:13:11.122489   13553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:13:11.145156   13553 config.go:178] Loaded profile config "embed-certs-20220531111208-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:13:11.145842   13553 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:13:11.217087   13553 docker.go:137] docker version: linux-20.10.14
	I0531 11:13:11.217221   13553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:13:11.343566   13553 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:13:11.291646587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:13:11.387031   13553 out.go:177] * Using the docker driver based on existing profile
	I0531 11:13:11.408143   13553 start.go:284] selected driver: docker
	I0531 11:13:11.408166   13553 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:11.408292   13553 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:13:11.410542   13553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:13:11.535319   13553 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:13:11.48504376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:13:11.535472   13553 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:13:11.535492   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:11.535500   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:11.535513   13553 start_flags.go:306] config:
	{Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:11.579012   13553 out.go:177] * Starting control plane node embed-certs-20220531111208-2169 in cluster embed-certs-20220531111208-2169
	I0531 11:13:11.600345   13553 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:13:11.622204   13553 out.go:177] * Pulling base image ...
	I0531 11:13:11.664279   13553 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:13:11.664367   13553 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:13:11.664355   13553 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:13:11.664398   13553 cache.go:57] Caching tarball of preloaded images
	I0531 11:13:11.664595   13553 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:13:11.664618   13553 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
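The preload probe logged above reduces to a stat of a versioned tarball path under the minikube home directory; a sketch with the filename composed the way it appears in the log (the helper name is mine):

	// Report whether the preloaded-images tarball for a given Kubernetes
	// version is already cached, e.g. .minikube/cache/preloaded-tarball/
	// preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func preloadExists(minikubeHome, k8sVersion string) bool {
		p := filepath.Join(minikubeHome, "cache", "preloaded-tarball",
			fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion))
		_, err := os.Stat(p)
		return err == nil
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.23.6"))
	}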
	I0531 11:13:11.665489   13553 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/config.json ...
	I0531 11:13:11.728631   13553 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:13:11.728650   13553 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:13:11.728661   13553 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:13:11.728716   13553 start.go:352] acquiring machines lock for embed-certs-20220531111208-2169: {Name:mk6b884d6089a1578cdaf488d7f8fffed1b73a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:13:11.728792   13553 start.go:356] acquired machines lock for "embed-certs-20220531111208-2169" in 57.599µs
	I0531 11:13:11.728839   13553 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:13:11.728846   13553 fix.go:55] fixHost starting: 
	I0531 11:13:11.729063   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:13:11.794705   13553 fix.go:103] recreateIfNeeded on embed-certs-20220531111208-2169: state=Stopped err=<nil>
	W0531 11:13:11.794739   13553 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:13:11.838307   13553 out.go:177] * Restarting existing docker container for "embed-certs-20220531111208-2169" ...
	I0531 11:13:11.859598   13553 cli_runner.go:164] Run: docker start embed-certs-20220531111208-2169
	I0531 11:13:12.207124   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:13:12.278559   13553 kic.go:416] container "embed-certs-20220531111208-2169" state is running.
	I0531 11:13:12.279154   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
	I0531 11:13:12.351999   13553 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/config.json ...
	I0531 11:13:12.352414   13553 machine.go:88] provisioning docker machine ...
	I0531 11:13:12.352438   13553 ubuntu.go:169] provisioning hostname "embed-certs-20220531111208-2169"
	I0531 11:13:12.352499   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:12.426073   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:12.426254   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:12.426271   13553 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531111208-2169 && echo "embed-certs-20220531111208-2169" | sudo tee /etc/hostname
	I0531 11:13:12.546985   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531111208-2169
	
	I0531 11:13:12.547055   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:12.667019   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:12.667153   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:12.667167   13553 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531111208-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531111208-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531111208-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
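The shell run over SSH above is the standard idempotent /etc/hosts patch: do nothing if the hostname already appears, rewrite an existing 127.0.1.1 line if there is one, and only otherwise append. The same decision logic in Go, operating on the file contents as a string (a sketch mirroring the grep/sed branches, not minikube's implementation):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry returns hosts with a 127.0.1.1 entry for name,
	// following the same three branches as the shell above.
	func ensureHostsEntry(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			for _, f := range strings.Fields(l) {
				if f == name {
					return hosts // already present: no-op
				}
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // rewrite the existing entry
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "127.0.1.1 " + name + "\n" // append a new entry
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "embed-certs-20220531111208-2169"))
	}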
	I0531 11:13:12.778841   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:13:12.778871   13553 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:13:12.778892   13553 ubuntu.go:177] setting up certificates
	I0531 11:13:12.778902   13553 provision.go:83] configureAuth start
	I0531 11:13:12.778963   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
	I0531 11:13:12.851177   13553 provision.go:138] copyHostCerts
	I0531 11:13:12.851272   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:13:12.851284   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:13:12.851409   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:13:12.851635   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:13:12.851644   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:13:12.851702   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:13:12.851836   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:13:12.851845   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:13:12.851899   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:13:12.852005   13553 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531111208-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531111208-2169]
	I0531 11:13:13.012300   13553 provision.go:172] copyRemoteCerts
	I0531 11:13:13.012367   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:13:13.012411   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.083950   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:13.163687   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:13:13.181984   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 11:13:13.202769   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:13:13.220771   13553 provision.go:86] duration metric: configureAuth took 441.859262ms
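The configureAuth sequence above rebuilds the Docker TLS material: the host certs are synced into the profile, then a fresh server certificate is signed by the minikube CA with the SAN set shown in the log (container IP 192.168.58.2 plus loopback and hostname aliases). A minimal sketch of issuing such a SAN-bearing certificate with Go's crypto/x509, using a throwaway in-memory CA where minikube would load ca.pem/ca-key.pem from disk:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; minikube would instead load ca.pem / ca-key.pem from disk.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(1, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}

		// Server certificate carrying the SAN set reported in the log above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20220531111208-2169"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "embed-certs-20220531111208-2169"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}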
	I0531 11:13:13.220785   13553 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:13:13.220931   13553 config.go:178] Loaded profile config "embed-certs-20220531111208-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:13:13.220996   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.290761   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.290928   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.290938   13553 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:13:13.403887   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:13:13.403899   13553 ubuntu.go:71] root file system type: overlay
	I0531 11:13:13.404028   13553 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:13:13.404100   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.473905   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.474051   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.474101   13553 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:13:13.592185   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:13:13.592261   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.662203   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.662343   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.662357   13553 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:13:13.777966   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:13:13.777983   13553 machine.go:91] provisioned docker machine in 1.425577512s
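Every `sshutil.go:53] new ssh client` line above corresponds to a key-authenticated SSH connection to the container's forwarded SSH port (127.0.0.1:52734 for this node), and each `ssh_runner.go:195] Run:` line is one command executed over such a connection. A self-contained sketch of that pattern, assuming golang.org/x/crypto/ssh rather than minikube's internal wrapper, with a placeholder key path:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Placeholder path; the log uses the profile's machines/<name>/id_rsa key.
		keyPath := "/path/to/.minikube/machines/embed-certs-20220531111208-2169/id_rsa"
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:52734", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test rig only
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()

		// Each "ssh_runner.go:195] Run:" entry maps to one session, one command.
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}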
	I0531 11:13:13.777991   13553 start.go:306] post-start starting for "embed-certs-20220531111208-2169" (driver="docker")
	I0531 11:13:13.777998   13553 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:13:13.778067   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:13:13.778116   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.848237   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:13.932021   13553 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:13:13.935470   13553 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:13:13.935482   13553 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:13:13.935489   13553 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:13:13.935497   13553 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:13:13.935504   13553 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:13:13.935616   13553 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:13:13.935749   13553 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:13:13.935898   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:13:13.942941   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:13:13.960034   13553 start.go:309] post-start completed in 182.035145ms
	I0531 11:13:13.960102   13553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:13:13.960153   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.029714   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.110010   13553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:13:14.114571   13553 fix.go:57] fixHost completed within 2.385751879s
	I0531 11:13:14.114581   13553 start.go:81] releasing machines lock for "embed-certs-20220531111208-2169", held for 2.385811827s
	I0531 11:13:14.114647   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
	I0531 11:13:14.183914   13553 ssh_runner.go:195] Run: systemctl --version
	I0531 11:13:14.183932   13553 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:13:14.183988   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.183999   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.259237   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.261186   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.338523   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:13:14.475654   13553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:13:14.485255   13553 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:13:14.485320   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:13:14.495800   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:13:14.508692   13553 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:13:14.578970   13553 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:13:14.646485   13553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:13:14.656123   13553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:13:14.719480   13553 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:13:14.729422   13553 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:13:14.764747   13553 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:13:14.842937   13553 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:13:14.843097   13553 cli_runner.go:164] Run: docker exec -t embed-certs-20220531111208-2169 dig +short host.docker.internal
	I0531 11:13:14.980735   13553 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:13:14.980851   13553 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:13:14.985209   13553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:13:14.995189   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:15.066041   13553 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:13:15.066120   13553 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:13:15.099230   13553 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:13:15.099246   13553 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:13:15.099322   13553 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:13:15.128293   13553 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:13:15.128309   13553 cache_images.go:84] Images are preloaded, skipping loading
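The preload verification above is a set-inclusion check: list what the container's Docker daemon already holds and confirm every expected preloaded image is present, in which case tarball extraction is skipped. A rough equivalent of that check (expected list abbreviated to three of the nine images; not minikube's actual helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagesPreloaded lists what the runtime already has and verifies every
	// expected image is present (an approximation of the minikube check).
	func imagesPreloaded(expected []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := make(map[string]bool)
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, want := range expected {
			if !have[want] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := imagesPreloaded([]string{
			"k8s.gcr.io/kube-apiserver:v1.23.6",
			"k8s.gcr.io/etcd:3.5.1-0",
			"k8s.gcr.io/pause:3.6",
		})
		fmt.Println(ok, err)
	}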
	I0531 11:13:15.128404   13553 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:13:15.201388   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:15.201399   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:15.201412   13553 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:13:15.201426   13553 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531111208-2169 NodeName:embed-certs-20220531111208-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:13:15.201536   13553 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220531111208-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 11:13:15.201613   13553 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220531111208-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
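In the KubeletConfiguration dump above, evictionHard thresholds of "0%" together with imageGCHighThresholdPercent: 100 effectively disable kubelet disk-pressure eviction, as the embedded comment notes. A small sketch of how that fragment parses, using a cut-down stand-in for the real upstream KubeletConfiguration type:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	// kubeletCfg is a cut-down stand-in for the upstream KubeletConfiguration
	// type, covering only the eviction-related fields shown in the log.
	type kubeletCfg struct {
		CgroupDriver                string            `yaml:"cgroupDriver"`
		ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
		EvictionHard                map[string]string `yaml:"evictionHard"`
	}

	func main() {
		doc := "cgroupDriver: systemd\n" +
			"imageGCHighThresholdPercent: 100\n" +
			"evictionHard:\n" +
			"  nodefs.available: \"0%\"\n" +
			"  nodefs.inodesFree: \"0%\"\n" +
			"  imagefs.available: \"0%\"\n"
		var cfg kubeletCfg
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			panic(err)
		}
		// All hard-eviction thresholds are effectively disabled.
		fmt.Printf("%+v\n", cfg)
	}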
	I0531 11:13:15.201672   13553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:13:15.209154   13553 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:13:15.209203   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:13:15.216165   13553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0531 11:13:15.228487   13553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:13:15.241530   13553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0531 11:13:15.253811   13553 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:13:15.257550   13553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
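The /etc/hosts one-liners above (for host.minikube.internal and earlier for control-plane.minikube.internal's sibling entry) implement an upsert: drop any existing line ending in a tab plus the name, then append a fresh ip-tab-name pair. The same logic as a pure-function Go sketch (assumes the input ends with a newline, as /etc/hosts normally does):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsEntry mirrors the shell one-liner above: remove any line
	// ending in "\t<name>", then append the fresh "ip\tname" mapping.
	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n"
		fmt.Print(upsertHostsEntry(hosts, "192.168.58.2", "control-plane.minikube.internal"))
	}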
	I0531 11:13:15.266790   13553 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169 for IP: 192.168.58.2
	I0531 11:13:15.266894   13553 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:13:15.266943   13553 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:13:15.267029   13553 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/client.key
	I0531 11:13:15.267089   13553 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.key.cee25041
	I0531 11:13:15.267135   13553 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.key
	I0531 11:13:15.267327   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:13:15.267368   13553 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:13:15.267379   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:13:15.267410   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:13:15.267442   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:13:15.267475   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:13:15.267531   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:13:15.268077   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:13:15.286065   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 11:13:15.303481   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:13:15.320612   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 11:13:15.338097   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:13:15.354546   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:13:15.370990   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:13:15.387662   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:13:15.404111   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:13:15.420738   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:13:15.437866   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:13:15.454492   13553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:13:15.467247   13553 ssh_runner.go:195] Run: openssl version
	I0531 11:13:15.472671   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:13:15.480357   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.484359   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.484403   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.489653   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:13:15.496718   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:13:15.504292   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.508441   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.508479   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.513962   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:13:15.521223   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:13:15.529012   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.533202   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.533243   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.538555   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:13:15.545740   13553 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:15.545831   13553 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:13:15.574436   13553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:13:15.582575   13553 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:13:15.582590   13553 kubeadm.go:626] restartCluster start
	I0531 11:13:15.582637   13553 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:13:15.589452   13553 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:15.589508   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:15.658995   13553 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531111208-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:13:15.659167   13553 kubeconfig.go:127] "embed-certs-20220531111208-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:13:15.659511   13553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:13:15.660882   13553 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:13:15.668383   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:15.668428   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:15.676694   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:15.878830   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:15.879010   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:15.890181   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.078851   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.079013   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.089798   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.277533   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.277607   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.287591   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.478834   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.478983   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.490523   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.678854   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.679031   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.689622   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.878660   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.878750   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.890310   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.078866   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.079037   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.089324   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.278043   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.278132   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.287546   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.478843   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.478972   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.489976   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.677174   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.677289   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.687745   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.878762   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.878861   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.889375   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.076807   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.076876   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.085455   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.278341   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.278493   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.289377   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.476936   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.477072   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.486862   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.677619   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.677776   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.688531   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.688541   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.688589   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.696891   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.696907   13553 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
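The run of identical pgrep failures above is a fixed-cadence poll, roughly every 200ms, that gives up when its deadline passes and yields the "timed out waiting for the condition" verdict. A sketch of that pattern (the helper name is hypothetical; minikube's actual loop lives in its bootstrapper code):

	package main

	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// pollApiserverPID retries the same pgrep the log shows every 200ms until
	// it succeeds or the deadline passes. Illustrative only.
	func pollApiserverPID(ctx context.Context) (string, error) {
		tick := time.NewTicker(200 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return "", errors.New("timed out waiting for the condition")
			case <-tick.C:
				out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
				if err == nil {
					return string(out), nil
				}
				// "Process exited with status 1": no match yet, keep polling.
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		pid, err := pollApiserverPID(ctx)
		if err != nil {
			fmt.Println("apiserver error:", err)
			return
		}
		fmt.Println("apiserver pid:", pid)
	}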
	I0531 11:13:18.696918   13553 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:13:18.696972   13553 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:13:18.727258   13553 docker.go:442] Stopping containers: [a90b3415795b f36f1b8ec616 151bcff24641 b44621a18266 2d9e1bd569b5 a9acd433a353 3df64dbfd2e2 7fc0f47f65d2 8ce1e9e63077 862692e6d3d2 19686116a07e 2784b5f463be d5a4a6345359 dcebe9e24d2f e6dac4e073bd b474066ffe56]
	I0531 11:13:18.727328   13553 ssh_runner.go:195] Run: docker stop a90b3415795b f36f1b8ec616 151bcff24641 b44621a18266 2d9e1bd569b5 a9acd433a353 3df64dbfd2e2 7fc0f47f65d2 8ce1e9e63077 862692e6d3d2 19686116a07e 2784b5f463be d5a4a6345359 dcebe9e24d2f e6dac4e073bd b474066ffe56
	I0531 11:13:18.758625   13553 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:13:18.769960   13553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:13:18.778599   13553 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 18:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 18:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 18:12 /etc/kubernetes/scheduler.conf
	
	I0531 11:13:18.778676   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:13:18.786996   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:13:18.795010   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:13:18.802414   13553 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.802469   13553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:13:18.810402   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:13:18.818706   13553 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.818775   13553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 11:13:18.825849   13553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:13:18.833007   13553 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:13:18.833017   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:18.877378   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:19.935016   13553 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057629084s)
	I0531 11:13:19.935035   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.058140   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.103466   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.152115   13553 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:13:20.152176   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:20.663756   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:21.164475   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:21.215674   13553 api_server.go:71] duration metric: took 1.063576049s to wait for apiserver process to appear ...
	I0531 11:13:21.215692   13553 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:13:21.215704   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:21.216920   13553 api_server.go:256] stopped: https://127.0.0.1:52733/healthz: Get "https://127.0.0.1:52733/healthz": EOF
	I0531 11:13:21.718992   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.167314   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:13:24.167334   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:13:24.217141   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.222557   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:24.222574   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:24.719071   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.726442   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:24.726459   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:25.216999   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:25.222726   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:25.222741   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:25.717101   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:25.724848   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 200:
	ok
	I0531 11:13:25.732599   13553 api_server.go:140] control plane version: v1.23.6
	I0531 11:13:25.732611   13553 api_server.go:130] duration metric: took 4.516969769s to wait for apiserver health ...
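The healthz progression above is the normal shape of an apiserver restart: first an EOF while the listener is still coming up, then 403 because anonymous access to /healthz is refused before RBAC is bootstrapped, then 500 while the remaining poststarthooks drain, and finally 200 "ok". A self-contained probe in the same spirit, skipping TLS verification as is appropriate only against a local test cluster:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz hits /healthz on the forwarded apiserver port and reports
	// the status code plus body (the body lists pending poststarthooks on 500).
	func checkHealthz(url string) (int, string, error) {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The server cert is signed by minikube's own CA; skipping
			// verification keeps the sketch self-contained (test rig only).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), err
	}

	func main() {
		code, body, err := checkHealthz("https://127.0.0.1:52733/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. EOF while the apiserver is still binding
			return
		}
		fmt.Printf("returned %d:\n%s\n", code, body)
	}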
	I0531 11:13:25.732616   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:25.732621   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:25.732632   13553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:13:25.741842   13553 system_pods.go:59] 8 kube-system pods found
	I0531 11:13:25.741859   13553 system_pods.go:61] "coredns-64897985d-45rxk" [1d1af550-c7eb-4d3d-a99e-ea74b583e84d] Running
	I0531 11:13:25.741863   13553 system_pods.go:61] "etcd-embed-certs-20220531111208-2169" [8b0ce277-ff5a-4e5b-b019-42c569689abb] Running
	I0531 11:13:25.741867   13553 system_pods.go:61] "kube-apiserver-embed-certs-20220531111208-2169" [b2087c02-761e-4919-8b92-9c3ae53f2821] Running
	I0531 11:13:25.741876   13553 system_pods.go:61] "kube-controller-manager-embed-certs-20220531111208-2169" [a56fc9fd-2eee-4f73-904d-0de881e33d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 11:13:25.741881   13553 system_pods.go:61] "kube-proxy-lgwn5" [9aad1763-1139-4bed-8c7d-a956e68d3386] Running
	I0531 11:13:25.741885   13553 system_pods.go:61] "kube-scheduler-embed-certs-20220531111208-2169" [9297a013-1420-42ab-8c26-7352aca786b3] Running
	I0531 11:13:25.741890   13553 system_pods.go:61] "metrics-server-b955d9d8-jbxp2" [ad7ca455-4720-4932-95d3-703a51595cb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:13:25.741895   13553 system_pods.go:61] "storage-provisioner" [d7df490e-a02b-4db2-912b-0d64caf0924b] Running
	I0531 11:13:25.741900   13553 system_pods.go:74] duration metric: took 9.263068ms to wait for pod list to return data ...
	I0531 11:13:25.741905   13553 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:13:25.745283   13553 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:13:25.745301   13553 node_conditions.go:123] node cpu capacity is 6
	I0531 11:13:25.745322   13553 node_conditions.go:105] duration metric: took 3.412768ms to run NodePressure ...
	I0531 11:13:25.745359   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:26.023161   13553 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 11:13:26.027527   13553 kubeadm.go:777] kubelet initialised
	I0531 11:13:26.027540   13553 kubeadm.go:778] duration metric: took 4.364923ms waiting for restarted kubelet to initialise ...
	I0531 11:13:26.027549   13553 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:13:26.034285   13553 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-45rxk" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.086851   13553 pod_ready.go:92] pod "coredns-64897985d-45rxk" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.086875   13553 pod_ready.go:81] duration metric: took 52.574215ms waiting for pod "coredns-64897985d-45rxk" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.086892   13553 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.093072   13553 pod_ready.go:92] pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.093082   13553 pod_ready.go:81] duration metric: took 6.180628ms waiting for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.093089   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.099122   13553 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.099133   13553 pod_ready.go:81] duration metric: took 6.039477ms waiting for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.099139   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:28.146822   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:30.645890   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:33.144139   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:34.643120   13553 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:34.643133   13553 pod_ready.go:81] duration metric: took 8.544092302s waiting for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.643140   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lgwn5" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.647169   13553 pod_ready.go:92] pod "kube-proxy-lgwn5" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:34.647176   13553 pod_ready.go:81] duration metric: took 4.0327ms waiting for pod "kube-proxy-lgwn5" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.647182   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:36.657938   13553 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:37.157814   13553 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:37.157828   13553 pod_ready.go:81] duration metric: took 2.510670323s waiting for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:37.157835   13553 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:39.168841   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:41.170734   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:43.669021   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:45.669098   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:47.671012   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:50.168445   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:52.170563   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:54.668999   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:57.170207   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:59.170988   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:01.670072   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:03.670082   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:06.167570   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:08.167638   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:10.169806   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:12.670944   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:15.169753   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:17.667759   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:19.670165   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:21.670624   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:24.168819   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:26.669956   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:28.670940   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	W0531 11:14:33.760041   13098 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0531 11:14:33.760073   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:14:34.182940   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:14:34.192616   13098 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:14:34.192666   13098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:14:34.200294   13098 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:14:34.200312   13098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:14:31.169348   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:33.668612   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:34.901603   13098 out.go:204]   - Generating certificates and keys ...
	I0531 11:14:36.104005   13098 out.go:204]   - Booting up control plane ...
	I0531 11:14:36.168890   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:38.669937   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:41.168229   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:43.168404   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:45.668439   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:47.670324   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:50.169250   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:52.666875   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:54.666872   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:56.667829   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:58.668136   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:00.668681   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:03.166600   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:05.168912   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:07.668122   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:09.669752   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:12.167201   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:14.169370   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:16.665960   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:18.669950   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:21.166794   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:23.168174   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:25.666653   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:27.669565   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:30.166540   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:32.168336   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:34.666131   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:36.668357   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:38.669065   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:41.168929   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:43.667201   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:45.667894   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:48.166425   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:50.168229   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:52.666884   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:54.667615   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:56.667805   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:15:59.169231   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:01.665509   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:03.669718   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:06.165318   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:08.166716   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:10.167687   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:12.668552   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:15.166558   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:17.167341   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:19.665622   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:21.667455   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:24.169405   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:26.667161   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:29.166126   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:31.020061   13098 kubeadm.go:397] StartCluster complete in 8m1.555975545s
	I0531 11:16:31.020140   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:16:31.050974   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.050987   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:16:31.051042   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:16:31.080367   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.080379   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:16:31.080436   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:16:31.109454   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.109467   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:16:31.109523   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:16:31.138029   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.138040   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:16:31.138093   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:16:31.168696   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.168708   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:16:31.168763   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:16:31.198083   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.198100   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:16:31.198162   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:16:31.226599   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.226611   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:16:31.226669   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:16:31.256444   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.256457   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:16:31.256464   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:16:31.256471   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:16:31.295837   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:16:31.295851   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:16:31.307624   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:16:31.307639   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:16:31.359917   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:16:31.359927   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:16:31.359936   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:16:31.372199   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:16:31.372211   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:16:33.427067   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054868747s)
	W0531 11:16:33.427193   13098 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0531 11:16:33.427208   13098 out.go:239] * 
	W0531 11:16:33.427350   13098 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 11:16:33.427367   13098 out.go:239] * 
	W0531 11:16:33.427900   13098 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 11:16:33.489529   13098 out.go:177] 
	W0531 11:16:33.531716   13098 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 11:16:33.531846   13098 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 11:16:33.531898   13098 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0531 11:16:33.573528   13098 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:08:26 UTC, end at Tue 2022-05-31 18:16:35 UTC. --
	May 31 18:08:26 old-k8s-version-20220531110241-2169 systemd[1]: Starting Docker Application Container Engine...
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.442700177Z" level=info msg="Starting up"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445540309Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445580709Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445602670Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445613401Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447324824Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447356391Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447369067Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447375179Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.454861167Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.459158936Z" level=info msg="Loading containers: start."
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.541211721Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.574193816Z" level=info msg="Loading containers: done."
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.582853381Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.582916167Z" level=info msg="Daemon has completed initialization"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 systemd[1]: Started Docker Application Container Engine.
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.603971346Z" level=info msg="API listen on [::]:2376"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.609838771Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-05-31T18:16:37Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:16:37 up  1:04,  0 users,  load average: 0.47, 0.85, 1.06
	Linux old-k8s-version-20220531110241-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:08:26 UTC, end at Tue 2022-05-31 18:16:37 UTC. --
	May 31 18:16:36 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 31 18:16:36 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	May 31 18:16:36 old-k8s-version-20220531110241-2169 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 31 18:16:36 old-k8s-version-20220531110241-2169 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 31 18:16:36 old-k8s-version-20220531110241-2169 kubelet[14389]: I0531 18:16:36.935306   14389 server.go:410] Version: v1.16.0
	May 31 18:16:36 old-k8s-version-20220531110241-2169 kubelet[14389]: I0531 18:16:36.935499   14389 plugins.go:100] No cloud provider specified.
	May 31 18:16:36 old-k8s-version-20220531110241-2169 kubelet[14389]: I0531 18:16:36.935512   14389 server.go:773] Client rotation is on, will bootstrap in background
	May 31 18:16:36 old-k8s-version-20220531110241-2169 kubelet[14389]: I0531 18:16:36.937250   14389 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 31 18:16:36 old-k8s-version-20220531110241-2169 kubelet[14389]: W0531 18:16:36.937930   14389 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	May 31 18:16:36 old-k8s-version-20220531110241-2169 kubelet[14389]: W0531 18:16:36.937999   14389 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	May 31 18:16:36 old-k8s-version-20220531110241-2169 kubelet[14389]: F0531 18:16:36.938030   14389 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	May 31 18:16:36 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 31 18:16:36 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 31 18:16:37 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 163.
	May 31 18:16:37 old-k8s-version-20220531110241-2169 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 31 18:16:37 old-k8s-version-20220531110241-2169 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 31 18:16:37 old-k8s-version-20220531110241-2169 kubelet[14421]: I0531 18:16:37.683283   14421 server.go:410] Version: v1.16.0
	May 31 18:16:37 old-k8s-version-20220531110241-2169 kubelet[14421]: I0531 18:16:37.683488   14421 plugins.go:100] No cloud provider specified.
	May 31 18:16:37 old-k8s-version-20220531110241-2169 kubelet[14421]: I0531 18:16:37.683498   14421 server.go:773] Client rotation is on, will bootstrap in background
	May 31 18:16:37 old-k8s-version-20220531110241-2169 kubelet[14421]: I0531 18:16:37.685262   14421 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 31 18:16:37 old-k8s-version-20220531110241-2169 kubelet[14421]: W0531 18:16:37.687694   14421 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	May 31 18:16:37 old-k8s-version-20220531110241-2169 kubelet[14421]: W0531 18:16:37.687767   14421 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	May 31 18:16:37 old-k8s-version-20220531110241-2169 kubelet[14421]: F0531 18:16:37.687795   14421 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	May 31 18:16:37 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 31 18:16:37 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 11:16:37.531293   13646 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
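The refused connections to localhost:8443 in the stderr above are consistent with the earlier container scan in this log finding no kube-apiserver container at all. As a minimal sketch, the same check can be reproduced from the host, assuming (as the docker inspect output elsewhere in this report shows) that the docker driver names the node container after the profile:

	# list any apiserver containers inside the node; the logs.go scan above found none
	docker exec old-k8s-version-20220531110241-2169 docker ps -a --filter name=k8s_kube-apiserver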
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 2 (455.613055ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220531110241-2169" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (493.61s)
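The failure above ends with minikube's own suggestion: check 'journalctl -xeu kubelet' and pass --extra-config=kubelet.cgroup-driver=systemd to minikube start. A hedged sketch of acting on that suggestion for this run follows; the profile and container names are taken from the log, the docker-exec path again assumes the node container shares the profile name, and it is untested here whether the override clears the "failed to run Kubelet: mountpoint for cpu not found" error on this host:

	# read the kubelet unit logs inside the node container, per the suggestion
	docker exec old-k8s-version-20220531110241-2169 journalctl -xeu kubelet

	# retry the start with the suggested kubelet cgroup-driver override
	out/minikube-darwin-amd64 start -p old-k8s-version-20220531110241-2169 --extra-config=kubelet.cgroup-driver=systemd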

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (43.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220531110349-2169 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
E0531 11:11:33.976270    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169: exit status 2 (16.101184263s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169: exit status 2 (16.099033055s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220531110349-2169 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-darwin-amd64 unpause -p no-preload-20220531110349-2169 --alsologtostderr -v=1: (1.07583793s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
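For reference, the pause/status/unpause sequence above reduces to three CLI invocations plus a string comparison on the templated status output. A hedged sketch of that flow, shelling out to the same binary and profile named in the log (the run helper is hypothetical; the real suite drives this through its own test helpers):

// Illustrative reproduction of the pause -> status -> unpause flow logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const profile = "no-preload-20220531110349-2169"

	if _, err := run("pause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println("pause failed:", err)
		return
	}
	// `status` exits non-zero for non-Running states, so ignore the exit error
	// and inspect stdout, as the harness does ("status error ... may be ok").
	state, _ := run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	if state != "Paused" {
		fmt.Printf("post-pause apiserver status = %q; want = %q\n", state, "Paused")
	}
	if _, err := run("unpause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println("unpause failed:", err)
	}
}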
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220531110349-2169
helpers_test.go:235: (dbg) docker inspect no-preload-20220531110349-2169:

-- stdout --
	[
	    {
	        "Id": "c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a",
	        "Created": "2022-05-31T18:03:51.07276327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:05:03.886070335Z",
	            "FinishedAt": "2022-05-31T18:05:01.990570695Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a/hosts",
	        "LogPath": "/var/lib/docker/containers/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a-json.log",
	        "Name": "/no-preload-20220531110349-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220531110349-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220531110349-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/63109090f6b4c35e5687da31ee7ce532cddaf41d21b05a6df8ae11c3486fe9fe-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63109090f6b4c35e5687da31ee7ce532cddaf41d21b05a6df8ae11c3486fe9fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63109090f6b4c35e5687da31ee7ce532cddaf41d21b05a6df8ae11c3486fe9fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63109090f6b4c35e5687da31ee7ce532cddaf41d21b05a6df8ae11c3486fe9fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220531110349-2169",
	                "Source": "/var/lib/docker/volumes/no-preload-20220531110349-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220531110349-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220531110349-2169",
	                "name.minikube.sigs.k8s.io": "no-preload-20220531110349-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f416093250515b12442e22c72d9a1a37327425dccabe2432ce68e9a32a4bb19",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51693"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51694"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51695"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51696"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51697"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4f4160932505",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220531110349-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c49110de8401",
	                        "no-preload-20220531110349-2169"
	                    ],
	                    "NetworkID": "8f956b17300170310409428d6088c5b2b67174350067b9d66aeee84ee79b99e9",
	                    "EndpointID": "0953aff0bbbe269cf4ee651c4b3cfb348a6a18ea2d3f2c2a6e5bd9cd81becebf",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
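The Ports map in the inspect output above (e.g. 8443/tcp published on 127.0.0.1:51697) is what the harness reads via the `docker container inspect -f` template seen throughout this log. A standalone sketch of that lookup (illustrative; the suite wraps the same command in its cli_runner):

// Sketch of the host-port lookup performed by the cli_runner lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, port string) (string, error) {
	// Same Go template the log shows, e.g. for "8443/tcp":
	// {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("no-preload-20220531110349-2169", "8443/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver published on 127.0.0.1:" + p) // "51697" in the inspect output above
}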
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220531110349-2169 logs -n 25
E0531 11:11:57.113313    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220531110349-2169 logs -n 25: (2.717969538s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p calico-20220531104927-2169                     | calico-20220531104927-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:00 PDT |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p cilium-20220531104927-2169                     | cilium-20220531104927-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:00 PDT |
	| delete  | -p calico-20220531104927-2169                     | calico-20220531104927-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:00 PDT |
	| start   | -p bridge-20220531104925-2169                     | bridge-20220531104925-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:01 PDT |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                        |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p bridge-20220531104925-2169                     | bridge-20220531104925-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:01 PDT |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p bridge-20220531104925-2169                     | bridge-20220531104925-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:01 PDT |
	| start   | -p false-20220531104926-2169                      | false-20220531104926-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:01 PDT |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                        |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p false-20220531104926-2169                      | false-20220531104926-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:01 PDT |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p false-20220531104926-2169                      | false-20220531104926-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	| start   | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	| start   | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:03 PDT |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	| start   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --memory=2200                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                        |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220531110241-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220531110241-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                        |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --memory=2200                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                        |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                        |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                        |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                        |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:08:24
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:08:24.864423   13098 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:08:24.864582   13098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:08:24.864588   13098 out.go:309] Setting ErrFile to fd 2...
	I0531 11:08:24.864592   13098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:08:24.864692   13098 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:08:24.864985   13098 out.go:303] Setting JSON to false
	I0531 11:08:24.879863   13098 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4073,"bootTime":1654016431,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:08:24.879988   13098 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:08:24.902035   13098 out.go:177] * [old-k8s-version-20220531110241-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:08:24.945014   13098 notify.go:193] Checking for updates...
	I0531 11:08:24.966436   13098 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:08:24.987830   13098 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:08:25.009108   13098 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:08:25.030802   13098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:08:25.052040   13098 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:08:25.074525   13098 config.go:178] Loaded profile config "old-k8s-version-20220531110241-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 11:08:25.096505   13098 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0531 11:08:25.117749   13098 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:08:25.191594   13098 docker.go:137] docker version: linux-20.10.14
	I0531 11:08:25.191723   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:08:25.317616   13098 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:08:25.254314323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:08:25.361153   13098 out.go:177] * Using the docker driver based on existing profile
	I0531 11:08:25.382343   13098 start.go:284] selected driver: docker
	I0531 11:08:25.382377   13098 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:25.382520   13098 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:08:25.385966   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:08:25.513518   13098 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:08:25.450743823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:08:25.513684   13098 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:08:25.513704   13098 cni.go:95] Creating CNI manager for ""
	I0531 11:08:25.513712   13098 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:08:25.513726   13098 start_flags.go:306] config:
	{Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:25.535504   13098 out.go:177] * Starting control plane node old-k8s-version-20220531110241-2169 in cluster old-k8s-version-20220531110241-2169
	I0531 11:08:25.561264   13098 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:08:25.582190   13098 out.go:177] * Pulling base image ...
	I0531 11:08:25.624028   13098 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:08:25.624035   13098 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:08:25.624091   13098 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 11:08:25.624108   13098 cache.go:57] Caching tarball of preloaded images
	I0531 11:08:25.624296   13098 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:08:25.624329   13098 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
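The preload check logged above (preload.go) is essentially a stat on a versioned tarball path under the cache directory; it is the same file the preload-exists assertion stats, so when it is absent the download-only test fails and a fresh start falls back to downloading. A minimal sketch, assuming the path layout shown in the log (the helper name is hypothetical):

// Illustrative version of the preload-existence check from the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cached tarball path printed by preload.go:148.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.16.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing, would download:", err)
		return
	}
	fmt.Println("found local preload, skipping download:", p)
}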
	I0531 11:08:25.625057   13098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json ...
	I0531 11:08:25.688021   13098 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:08:25.688038   13098 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:08:25.688049   13098 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:08:25.688095   13098 start.go:352] acquiring machines lock for old-k8s-version-20220531110241-2169: {Name:mkde0b1c8a03f8862b5675925132e687b92ccd7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:08:25.688173   13098 start.go:356] acquired machines lock for "old-k8s-version-20220531110241-2169" in 55.993µs
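The machines lock above serializes concurrent profile operations with the parameters printed in the log (Delay:500ms, Timeout:10m0s). A rough illustrative sketch of such a retrying, timeout-bounded lock; the real implementation uses a lock library rather than this O_EXCL file trick:

// Hypothetical retrying file lock with the delay/timeout shown in the log.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_CREATE|O_EXCL fails if the lock file already exists.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("acquired machines lock in", time.Since(start))
}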
	I0531 11:08:25.688192   13098 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:08:25.688224   13098 fix.go:55] fixHost starting: 
	I0531 11:08:25.688466   13098 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:08:25.755111   13098 fix.go:103] recreateIfNeeded on old-k8s-version-20220531110241-2169: state=Stopped err=<nil>
	W0531 11:08:25.755155   13098 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:08:25.797678   13098 out.go:177] * Restarting existing docker container for "old-k8s-version-20220531110241-2169" ...
	I0531 11:08:24.128522   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:26.625231   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:25.818473   13098 cli_runner.go:164] Run: docker start old-k8s-version-20220531110241-2169
	I0531 11:08:26.192165   13098 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:08:26.263698   13098 kic.go:416] container "old-k8s-version-20220531110241-2169" state is running.
	I0531 11:08:26.264351   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:26.337917   13098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json ...
	I0531 11:08:26.338311   13098 machine.go:88] provisioning docker machine ...
	I0531 11:08:26.338340   13098 ubuntu.go:169] provisioning hostname "old-k8s-version-20220531110241-2169"
	I0531 11:08:26.338453   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.410821   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:26.411035   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:26.411048   13098 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220531110241-2169 && echo "old-k8s-version-20220531110241-2169" | sudo tee /etc/hostname
	I0531 11:08:26.530934   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220531110241-2169
	
	I0531 11:08:26.531026   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.602777   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:26.602942   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:26.602957   13098 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220531110241-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220531110241-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220531110241-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:08:26.716578   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: 
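The SSH snippet above is an idempotent /etc/hosts edit: if the machine name is already mapped it does nothing; otherwise it rewrites an existing 127.0.1.1 entry in place or appends one. A loose local equivalent in Go (hypothetical helper; the Contains guard is only an approximation of the grep -xq test):

// Illustrative local analogue of the /etc/hosts rewrite run over SSH above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(contents, name string) string {
	if strings.Contains(contents, name) {
		return contents // already mapped; stand-in for the grep -xq guard
	}
	entry := "127.0.1.1 " + name
	lines := strings.Split(contents, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // rewrite the stale mapping, like the sed branch
			return strings.Join(lines, "\n")
		}
	}
	if !strings.HasSuffix(contents, "\n") {
		contents += "\n"
	}
	return contents + entry + "\n" // append, like the tee -a branch
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(ensureHostsEntry(string(data), "old-k8s-version-20220531110241-2169"))
}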
	I0531 11:08:26.716599   13098 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:08:26.716617   13098 ubuntu.go:177] setting up certificates
	I0531 11:08:26.716625   13098 provision.go:83] configureAuth start
	I0531 11:08:26.716695   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:26.787003   13098 provision.go:138] copyHostCerts
	I0531 11:08:26.787080   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:08:26.787096   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:08:26.787190   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:08:26.787413   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:08:26.787423   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:08:26.787482   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:08:26.787625   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:08:26.787631   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:08:26.787687   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:08:26.787803   13098 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220531110241-2169 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220531110241-2169]
	I0531 11:08:26.886368   13098 provision.go:172] copyRemoteCerts
	I0531 11:08:26.886424   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:08:26.886475   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.957750   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.039830   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0531 11:08:27.059791   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:08:27.076499   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:08:27.095218   13098 provision.go:86] duration metric: configureAuth took 378.579892ms
	I0531 11:08:27.095231   13098 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:08:27.095385   13098 config.go:178] Loaded profile config "old-k8s-version-20220531110241-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 11:08:27.095451   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.165741   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.165895   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.165906   13098 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:08:27.275339   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:08:27.275354   13098 ubuntu.go:71] root file system type: overlay
	I0531 11:08:27.275532   13098 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:08:27.275598   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.345524   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.345724   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.345774   13098 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:08:27.466818   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:08:27.466905   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.537313   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.537482   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.537496   13098 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:08:27.652716   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:08:27.652730   13098 machine.go:91] provisioned docker machine in 1.314427116s
	I0531 11:08:27.652737   13098 start.go:306] post-start starting for "old-k8s-version-20220531110241-2169" (driver="docker")
	I0531 11:08:27.652741   13098 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:08:27.652808   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:08:27.652850   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.722531   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.803808   13098 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:08:27.807457   13098 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:08:27.807489   13098 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:08:27.807499   13098 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:08:27.807506   13098 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:08:27.807514   13098 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:08:27.807618   13098 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:08:27.807774   13098 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:08:27.807937   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:08:27.815028   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:08:27.832481   13098 start.go:309] post-start completed in 179.738586ms
	I0531 11:08:27.832554   13098 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:08:27.832607   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.903577   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.985899   13098 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:08:27.990822   13098 fix.go:57] fixHost completed within 2.302646254s
	I0531 11:08:27.990835   13098 start.go:81] releasing machines lock for "old-k8s-version-20220531110241-2169", held for 2.30268259s
	I0531 11:08:27.990918   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:28.061472   13098 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:08:28.061476   13098 ssh_runner.go:195] Run: systemctl --version
	I0531 11:08:28.061544   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.061541   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.137038   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:28.138708   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:28.362084   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:08:28.375292   13098 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:08:28.385346   13098 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:08:28.385407   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:08:28.394958   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:08:28.407962   13098 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:08:28.477039   13098 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:08:28.550358   13098 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:08:28.560122   13098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:08:28.629417   13098 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:08:28.639660   13098 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:08:28.673402   13098 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:08:28.751209   13098 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0531 11:08:28.751364   13098 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220531110241-2169 dig +short host.docker.internal
	I0531 11:08:28.891410   13098 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:08:28.891541   13098 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:08:28.895978   13098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:08:28.906678   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.976360   13098 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:08:28.976426   13098 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:08:29.006401   13098 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 11:08:29.006417   13098 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:08:29.006493   13098 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:08:29.035658   13098 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 11:08:29.035672   13098 cache_images.go:84] Images are preloaded, skipping loading
	I0531 11:08:29.035742   13098 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:08:29.110219   13098 cni.go:95] Creating CNI manager for ""
	I0531 11:08:29.110231   13098 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:08:29.110243   13098 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:08:29.110256   13098 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220531110241-2169 NodeName:old-k8s-version-20220531110241-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:08:29.110376   13098 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220531110241-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220531110241-2169
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 11:08:29.110458   13098 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220531110241-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 11:08:29.110513   13098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0531 11:08:29.118416   13098 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:08:29.118475   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:08:29.127166   13098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0531 11:08:29.139824   13098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:08:29.152704   13098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0531 11:08:29.167560   13098 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:08:29.171514   13098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:08:29.180955   13098 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169 for IP: 192.168.49.2
	I0531 11:08:29.181081   13098 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:08:29.181135   13098 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:08:29.181221   13098 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.key
	I0531 11:08:29.181289   13098 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key.dd3b5fb2
	I0531 11:08:29.181350   13098 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key
	I0531 11:08:29.181563   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:08:29.181602   13098 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:08:29.181614   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:08:29.181650   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:08:29.181679   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:08:29.181715   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:08:29.181774   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:08:29.182294   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:08:29.204256   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 11:08:29.222547   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:08:29.240162   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 11:08:29.257426   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:08:29.274651   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:08:29.291982   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:08:29.310334   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:08:29.327611   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:08:29.345041   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:08:29.361815   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:08:29.379584   13098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:08:29.392136   13098 ssh_runner.go:195] Run: openssl version
	I0531 11:08:29.397577   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:08:29.405431   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.409147   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.409201   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.414250   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:08:29.421379   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:08:29.429280   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.433082   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.433125   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.438228   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:08:29.445650   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:08:29.453384   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.457538   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.457576   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.462718   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:08:29.469934   13098 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:29.470029   13098 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:08:29.501701   13098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:08:29.509593   13098 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:08:29.509612   13098 kubeadm.go:626] restartCluster start
	I0531 11:08:29.509663   13098 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:08:29.516645   13098 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:29.516701   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:29.586936   13098 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220531110241-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:08:29.587110   13098 kubeconfig.go:127] "old-k8s-version-20220531110241-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:08:29.587479   13098 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:08:29.588780   13098 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:08:29.596409   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:29.596461   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:29.604840   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:29.805295   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:29.805488   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:29.816379   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:28.626455   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:31.126569   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:30.005006   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.005125   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.014520   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.204919   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.205019   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.214251   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.406956   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.407122   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.417653   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.604950   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.605035   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.614470   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.805126   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.805274   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.814510   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.006953   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.007113   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.017745   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.206207   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.206350   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.217593   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.404972   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.405096   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.415473   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.606801   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.606929   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.616800   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.805593   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.805718   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.816339   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.005143   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.005270   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.015802   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.204976   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.205118   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.216470   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.406933   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.407072   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.417683   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.606963   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.607083   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.617829   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.617839   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.617884   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.628709   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.628721   13098 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 11:08:32.628730   13098 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:08:32.628795   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:08:32.656434   13098 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:08:32.666183   13098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:08:32.673748   13098 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 May 31 18:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5779 May 31 18:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5927 May 31 18:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5727 May 31 18:04 /etc/kubernetes/scheduler.conf
	
	I0531 11:08:32.673812   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:08:32.681500   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:08:32.689012   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:08:32.696145   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:08:32.703549   13098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:08:32.711764   13098 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:08:32.711775   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:32.763732   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.029517   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.265778182s)
	I0531 11:08:34.029540   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.237331   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.291890   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.348275   13098 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:08:34.348330   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:34.859154   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:33.625705   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:35.626278   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:37.628129   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:35.357249   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:35.859107   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:36.357652   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:36.859123   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:37.359109   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:37.859168   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:38.359070   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:38.857449   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:39.357079   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:39.858143   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:40.127313   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:42.627670   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:40.359003   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:40.859087   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:41.359036   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:41.859047   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:42.357140   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:42.857133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:43.357195   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:43.859044   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:44.357080   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:44.858240   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:44.627804   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:46.628678   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:45.357042   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:45.857108   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:46.357039   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:46.858005   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:47.357517   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:47.856962   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:48.358073   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:48.857317   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:49.356887   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:49.858909   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:49.126737   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:51.628829   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:50.358934   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:50.856994   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:51.358931   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:51.858801   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:52.356935   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:52.857770   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:53.357133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:53.858875   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:54.357428   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:54.856995   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:54.126925   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:56.628426   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:55.357549   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:55.858840   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:56.356750   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:56.858862   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:57.356865   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:57.858837   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:58.358311   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:58.858798   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:59.358828   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:59.858881   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:59.129005   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:01.628260   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:00.358750   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:00.858856   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:01.357557   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:01.858847   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:02.356665   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:02.858662   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:03.358757   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:03.857406   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:04.358099   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:04.856720   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:03.628429   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:06.128506   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:05.358724   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:05.858746   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:06.357258   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:06.856763   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:07.357893   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:07.858727   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:08.356831   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:08.857000   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:09.358665   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:09.858133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:08.627169   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:10.650647   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:10.357032   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:10.857957   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:11.356952   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:11.858660   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:12.357622   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:12.858640   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:13.356693   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:13.858667   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:14.357353   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:14.858510   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:13.127305   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:15.627020   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:15.358636   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:15.856620   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:16.357157   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:16.857097   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:17.356528   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:17.856738   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:18.356746   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:18.856987   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:19.358618   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:19.858357   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:18.127594   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:20.628797   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:20.357432   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:20.858551   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:21.358576   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:21.857145   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:22.357177   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:22.858306   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:23.356771   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:23.857014   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:24.357042   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:24.856754   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:23.126591   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:25.625261   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:27.627552   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:25.358119   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:25.857621   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:26.358516   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:26.857398   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:27.358455   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:27.858462   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:28.357167   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:28.856990   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:29.357801   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:29.857261   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:30.124338   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:32.124447   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:30.357437   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:30.857515   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:31.358160   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:31.858408   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:32.358413   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:32.857663   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:33.357837   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:33.857102   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:34.357463   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:34.388265   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.388277   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:34.388334   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:34.417576   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.417588   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:34.417644   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:34.446353   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.446366   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:34.446422   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:34.475446   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.475461   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:34.475516   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:34.505125   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.505137   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:34.505192   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:34.533497   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.533509   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:34.533572   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:34.562509   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.562526   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:34.562590   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:34.591764   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.591780   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:34.591788   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:34.591795   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:34.630492   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:34.630506   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:34.642193   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:34.642206   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:34.696106   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
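	[editor's note] The block above is one complete diagnostic pass by the second process (pid 13098), and it repeats essentially unchanged for the rest of this section: probe for a live apiserver with pgrep, enumerate each expected control-plane container by its k8s_ name filter, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. The describe-nodes step fails every pass with "connection refused" on localhost:8443 because no apiserver container exists. A hedged sketch of running the same probes by hand; the profile name is a placeholder (it does not appear in this excerpt), and the `minikube ssh -- <cmd>` pass-through is assumed to behave as in current minikube releases:

	    # Hypothetical profile name for illustration only; substitute your own.
	    PROFILE=my-profile

	    # 1. Same process probe minikube runs: is any apiserver process alive?
	    minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # 2. Same container probe: did Docker ever create the apiserver container?
	    minikube -p "$PROFILE" ssh -- docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'

	    # 3. Same kubeconfig check that keeps failing with "connection refused":
	    minikube -p "$PROFILE" ssh -- sudo /var/lib/minikube/binaries/v1.16.0/kubectl \
	      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig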
	I0531 11:09:34.696117   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:34.696124   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:34.708414   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:34.708426   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:34.125011   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:36.126579   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:36.762711   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054297817s)
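	[editor's note] The "container status" gather that just completed (about 2s per pass throughout this run) relies on a small shell fallback. Annotated form of the exact command from the log; the behavior notes are plain bash/which semantics:

	    # `which crictl || echo crictl` expands to the crictl path when crictl is
	    # installed; otherwise it expands to the literal word "crictl", whose
	    # "command not found" failure triggers the `|| sudo docker ps -a` fallback.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a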
	I0531 11:09:39.263497   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:39.356952   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:39.386768   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.386781   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:39.386842   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:39.417308   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.417321   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:39.417377   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:39.447193   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.447206   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:39.447273   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:39.476858   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.476871   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:39.476925   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:39.505331   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.505343   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:39.505393   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:39.534339   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.534350   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:39.534411   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:39.564150   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.564163   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:39.564226   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:39.593779   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.593792   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:39.593799   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:39.593807   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:39.605961   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:39.605980   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:39.660198   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:39.660212   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:39.660221   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:39.673023   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:39.673035   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:38.625485   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:40.627636   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:41.727761   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054739505s)
	I0531 11:09:41.727870   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:41.727877   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:44.270600   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:44.357538   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:44.387750   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.387765   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:44.387828   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:44.417243   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.417256   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:44.417316   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:44.446079   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.446093   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:44.446149   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:44.475402   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.475414   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:44.475474   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:44.504617   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.504631   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:44.504699   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:44.534026   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.534043   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:44.534107   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:44.563392   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.563406   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:44.563466   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:44.591445   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.591457   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:44.591464   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:44.591470   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:44.631333   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:44.631348   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:44.643173   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:44.643186   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:44.696709   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:44.696722   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:44.696730   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:44.709853   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:44.709866   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:43.125303   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:45.126063   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:47.628073   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:46.763128   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053274008s)
	I0531 11:09:49.263476   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:49.356184   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:49.386522   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.386534   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:49.386587   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:49.415937   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.415954   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:49.416011   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:49.444575   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.444586   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:49.444640   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:49.473589   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.473602   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:49.473660   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:49.501607   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.501620   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:49.501680   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:49.530816   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.530829   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:49.530905   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:49.561098   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.561110   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:49.561164   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:49.590698   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.590715   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:49.590723   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:49.590730   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:49.629663   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:49.629677   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:49.641508   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:49.641539   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:49.696749   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:49.696760   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:49.696771   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:49.709171   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:49.709184   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:50.125867   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:52.127516   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:51.764551   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055377989s)
	I0531 11:09:54.264918   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:54.356039   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:54.388392   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.388407   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:54.388479   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:54.421365   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.421378   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:54.421433   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:54.455045   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.455057   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:54.455119   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:54.489207   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.489220   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:54.489279   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:54.521630   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.521643   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:54.521702   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:54.551997   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.552012   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:54.552089   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:54.585330   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.585343   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:54.585405   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:54.618689   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.618707   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:54.618719   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:54.618731   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:53.620597   12940 pod_ready.go:81] duration metric: took 4m0.400347067s waiting for pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace to be "Ready" ...
	E0531 11:09:53.620644   12940 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 11:09:53.620672   12940 pod_ready.go:38] duration metric: took 4m35.821645397s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:09:53.620704   12940 kubeadm.go:630] restartCluster took 4m46.551999937s
	W0531 11:09:53.620833   12940 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 11:09:53.620860   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
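	[editor's note] At this point the first process (pid 12940) gives up on restartCluster: the metrics-server wait hit its 4m0s ceiling, the extra-pod wait therefore fails, and minikube falls back to wiping the cluster. The reset command is logged verbatim above; reproduced here only to call out the PATH trick, since the node keeps a per-Kubernetes-version kubeadm under /var/lib/minikube/binaries:

	    # Copied from the log line above: put the version-matched binaries first on
	    # PATH, then force a kubeadm reset against the dockershim CRI socket.
	    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
	      kubeadm reset --cri-socket /var/run/dockershim.sock --force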
	I0531 11:09:56.676166   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057446673s)
	I0531 11:09:56.676301   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:56.676310   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:56.717480   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:56.717496   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:56.731748   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:56.731762   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:56.784506   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:56.784518   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:56.784525   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:59.299028   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:59.356233   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:59.387581   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.387594   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:59.387648   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:59.416950   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.416965   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:59.417026   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:59.445994   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.446006   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:59.446066   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:59.474706   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.474719   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:59.474774   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:59.503641   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.503653   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:59.503706   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:59.532168   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.532183   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:59.532238   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:59.561842   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.561855   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:59.561916   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:59.590504   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.590516   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:59.590522   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:59.590529   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:59.629633   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:59.629647   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:59.641945   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:59.641959   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:59.696474   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:59.696490   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:59.696496   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:59.709878   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:59.709892   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:01.764080   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054200644s)
	I0531 11:10:04.265110   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:04.356351   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:04.389088   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.389101   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:04.389161   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:04.418896   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.418909   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:04.418978   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:04.447037   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.447050   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:04.447113   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:04.476510   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.476525   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:04.476584   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:04.504763   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.504776   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:04.504830   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:04.533804   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.533816   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:04.533874   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:04.563500   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.563513   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:04.563570   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:04.592999   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.593012   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:04.593019   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:04.593025   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:04.631360   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:04.631374   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:04.643433   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:04.643448   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:04.696754   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:04.696772   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:04.696779   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:04.708788   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:04.708799   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:06.764822   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056035115s)
	I0531 11:10:09.266997   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:09.356193   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:09.388153   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.388167   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:09.388231   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:09.417585   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.417597   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:09.417653   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:09.449878   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.449891   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:09.449954   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:09.479850   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.479864   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:09.479927   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:09.509485   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.509498   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:09.509561   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:09.540190   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.540204   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:09.540259   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:09.569247   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.569259   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:09.569318   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:09.598109   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.598122   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:09.598129   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:09.598136   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:09.638429   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:09.638443   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:09.650114   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:09.650127   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:09.701838   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:09.701849   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:09.701856   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:09.714324   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:09.714337   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:11.769141   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054814108s)
	I0531 11:10:14.271474   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:14.357946   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:14.388903   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.388915   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:14.388971   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:14.417777   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.417789   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:14.417858   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:14.445824   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.445838   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:14.445899   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:14.475251   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.475263   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:14.475321   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:14.503865   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.503878   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:14.503932   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:14.533523   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.533536   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:14.533594   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:14.562861   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.562874   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:14.562926   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:14.593313   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.593326   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:14.593333   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:14.593340   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:14.647510   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:14.647524   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:14.647531   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:14.659937   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:14.659953   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:16.716744   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056804235s)
	I0531 11:10:16.716857   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:16.716863   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:16.754919   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:16.754931   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:19.267035   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:19.357894   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:19.391000   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.391013   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:19.391069   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:19.419657   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.419668   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:19.419722   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:19.449464   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.449476   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:19.449530   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:19.479823   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.479837   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:19.479896   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:19.509429   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.509443   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:19.509523   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:19.538786   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.538798   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:19.538853   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:19.568183   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.568199   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:19.568256   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:19.598298   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.598311   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:19.598318   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:19.598325   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:19.610062   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:19.610073   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:19.661888   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:19.661899   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:19.661905   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:19.673854   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:19.673866   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:21.733389   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059536851s)
	I0531 11:10:21.733494   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:21.733501   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:24.275115   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:24.356493   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:24.386276   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.386290   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:24.386350   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:24.416711   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.416723   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:24.416776   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:24.448608   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.448620   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:24.448673   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:24.478070   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.478085   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:24.478143   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:24.507952   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.507964   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:24.508019   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:24.536910   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.536923   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:24.536976   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:24.565298   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.565309   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:24.565363   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:24.594397   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.594408   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:24.594415   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:24.594421   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:24.646558   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:24.646575   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:24.646582   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:24.658715   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:24.658729   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:26.714683   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055966036s)
	I0531 11:10:26.714790   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:26.714797   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:26.754170   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:26.754183   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:29.268130   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:29.355669   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:29.386195   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.386207   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:29.386267   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:29.415255   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.415269   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:29.415327   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:29.445521   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.445533   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:29.445590   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:29.474576   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.474590   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:29.474648   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:29.503269   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.503283   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:29.503340   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:29.531750   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.531763   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:29.531818   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:29.560522   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.560534   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:29.560588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:29.589986   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.589997   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:29.590004   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:29.590012   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:31.969790   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.349383075s)
	I0531 11:10:31.969847   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:10:31.979794   12940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:10:31.987417   12940 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:10:31.987474   12940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:10:31.994661   12940 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:10:31.994688   12940 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:10:32.492460   12940 out.go:204]   - Generating certificates and keys ...
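	[editor's note] The reset (which took ~38s) removed /etc/kubernetes/*.conf, which is why the `ls -la` config check above exits with status 2 and the stale-config cleanup is skipped; minikube then re-bootstraps from scratch. The init invocation, copied from the log line above, with the preflight checks that are expected to fail inside a docker-driver node explicitly ignored:

	    # Same command as logged above: re-initialize the control plane from the
	    # generated kubeadm.yaml, skipping preflight checks that the docker driver
	    # cannot satisfy (ports, swap, memory, leftover manifest dirs, sysctls).
	    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables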
	I0531 11:10:31.642158   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052159996s)
	I0531 11:10:31.642264   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:31.642271   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:31.680540   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:31.680560   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:31.693978   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:31.693995   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:31.750664   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:31.750676   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:31.750683   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:34.264743   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:34.355629   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:34.389804   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.389817   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:34.389879   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:34.421065   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.421078   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:34.421133   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:34.450506   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.450525   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:34.450588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:34.480274   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.480286   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:34.480339   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:34.509810   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.509825   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:34.509885   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:34.547728   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.547741   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:34.547797   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:34.577758   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.577770   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:34.577824   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:34.607647   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.607660   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:34.607666   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:34.607673   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:34.646813   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:34.646827   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:34.659116   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:34.659131   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:34.711878   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:34.711895   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:34.711902   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:34.723823   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:34.723835   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
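The "container status" step relies on a small shell fallback chain, verbatim from the log: when crictl is not installed, "which crictl" prints nothing, "echo crictl" substitutes a bare name that fails to execute, and the trailing || hands the query to the Docker CLI instead.

    # prefer crictl if present, otherwise fall back to docker ps
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a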
	I0531 11:10:33.618772   12940 out.go:204]   - Booting up control plane ...
	I0531 11:10:36.778445   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054619098s)
	I0531 11:10:39.278826   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:39.355843   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:39.386690   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.386705   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:39.386759   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:39.415159   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.415171   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:39.415229   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:39.451994   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.452007   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:39.452062   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:39.480982   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.480996   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:39.481053   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:39.509323   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.509336   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:39.509390   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:39.537420   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.537432   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:39.537489   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:39.565876   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.565889   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:39.565942   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:39.596336   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.596347   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:39.596354   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:39.596361   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:39.653266   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:39.653276   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:39.653284   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:39.665996   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:39.666008   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:40.188826   12940 out.go:204]   - Configuring RBAC rules ...
	I0531 11:10:40.564114   12940 cni.go:95] Creating CNI manager for ""
	I0531 11:10:40.564126   12940 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:10:40.564150   12940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:10:40.564229   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:40.564240   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531110349-2169 minikube.k8s.io/updated_at=2022_05_31T11_10_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:40.734108   12940 ops.go:34] apiserver oom_adj: -16
	I0531 11:10:40.734198   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:41.356952   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:41.856602   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:42.356807   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
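After the RBAC step, process 12940 binds cluster-admin to the kube-system default service account, labels the node, and then polls "kubectl get sa default" until the default ServiceAccount exists; the repeated "get sa default" lines above and below are that loop, which the log later credits to elevateKubeSystemPrivileges (about 12.8s in this run). A hedged sketch of the wait; the roughly half-second interval is inferred from the timestamps, not taken from minikube's source:

    # poll until the controller-manager has created the default SA
    until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done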
	I0531 11:10:41.723703   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057708131s)
	I0531 11:10:41.723821   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:41.723829   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:41.762214   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:41.762228   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:44.274433   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:44.355956   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:44.388561   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.388573   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:44.388631   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:44.418528   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.418540   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:44.418596   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:44.448209   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.448228   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:44.448287   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:44.476717   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.476731   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:44.476794   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:44.506060   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.506073   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:44.506127   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:44.535489   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.535502   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:44.535556   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:44.566115   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.566126   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:44.566195   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:44.595347   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.595359   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:44.595366   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:44.595373   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:44.635087   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:44.635104   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:44.648064   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:44.648084   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:44.702705   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:44.702715   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:44.702725   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:44.715262   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:44.715275   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:42.856378   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:43.356331   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:43.856492   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:44.356267   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:44.858359   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:45.356651   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:45.856283   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:46.357540   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:46.856218   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:47.357035   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:46.769384   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05412268s)
	I0531 11:10:49.269850   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:49.356095   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:49.389116   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.389130   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:49.389189   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:49.418954   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.418966   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:49.419021   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:49.448672   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.448684   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:49.448748   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:49.477673   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.477685   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:49.477741   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:49.506658   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.506673   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:49.506736   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:49.535844   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.535856   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:49.535912   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:49.564691   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.564704   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:49.564757   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:49.594090   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.594102   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:49.594109   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:49.594116   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:49.634714   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:49.634727   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:49.646653   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:49.646666   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:49.699411   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:49.699421   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:49.699428   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:49.712418   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:49.712430   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:47.857137   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:48.356441   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:48.856134   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:49.356342   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:49.856351   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:50.357506   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:50.856154   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:51.356302   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:51.857335   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:52.356579   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:52.856094   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:53.356131   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:53.409894   12940 kubeadm.go:1045] duration metric: took 12.845886553s to wait for elevateKubeSystemPrivileges.
	I0531 11:10:53.409909   12940 kubeadm.go:397] StartCluster complete in 5m46.379280219s
	I0531 11:10:53.409926   12940 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:10:53.410003   12940 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:10:53.410518   12940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:10:53.925611   12940 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531110349-2169" rescaled to 1
	I0531 11:10:53.925646   12940 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:10:53.925699   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:10:53.925711   12940 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:10:53.925889   12940 config.go:178] Loaded profile config "no-preload-20220531110349-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:10:53.948507   12940 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531110349-2169"
	I0531 11:10:53.948520   12940 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531110349-2169"
	I0531 11:10:53.948526   12940 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531110349-2169"
	I0531 11:10:53.948526   12940 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531110349-2169"
	W0531 11:10:53.948534   12940 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:10:53.948425   12940 out.go:177] * Verifying Kubernetes components...
	I0531 11:10:53.948548   12940 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531110349-2169"
	I0531 11:10:53.948538   12940 addons.go:65] Setting dashboard=true in profile "no-preload-20220531110349-2169"
	I0531 11:10:53.989215   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:10:53.948540   12940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531110349-2169"
	W0531 11:10:53.948564   12940 addons.go:165] addon metrics-server should already be in state true
	I0531 11:10:53.989216   12940 addons.go:153] Setting addon dashboard=true in "no-preload-20220531110349-2169"
	W0531 11:10:53.989293   12940 addons.go:165] addon dashboard should already be in state true
	I0531 11:10:53.948581   12940 host.go:66] Checking if "no-preload-20220531110349-2169" exists ...
	I0531 11:10:53.989299   12940 host.go:66] Checking if "no-preload-20220531110349-2169" exists ...
	I0531 11:10:53.989322   12940 host.go:66] Checking if "no-preload-20220531110349-2169" exists ...
	I0531 11:10:53.989569   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:53.989706   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:53.989726   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:53.993447   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:54.029958   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.030047   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 11:10:54.106735   12940 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531110349-2169"
	I0531 11:10:54.143097   12940 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	W0531 11:10:54.143122   12940 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:10:54.179057   12940 host.go:66] Checking if "no-preload-20220531110349-2169" exists ...
	I0531 11:10:54.200136   12940 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 11:10:54.237117   12940 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:10:54.200658   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:54.258122   12940 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:10:54.295196   12940 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:10:54.332187   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 11:10:54.332258   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
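One notable step in this burst is the CoreDNS rewrite at 11:10:54: minikube streams the coredns ConfigMap through sed to insert a hosts block mapping host.minikube.internal to the host gateway (192.168.65.2 here), then replaces the ConfigMap in place; the pipeline completes about a second later, where the log reports the host record injected into CoreDNS. Condensed from the logged command (the original runs both kubectl invocations under sudo with the full binary path and an explicit --kubeconfig):

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -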
	I0531 11:10:51.767720   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055302033s)
	I0531 11:10:54.268584   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:54.355312   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:54.402717   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.402746   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:54.402853   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:54.471995   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.472008   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:54.472076   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:54.519373   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.519388   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:54.519452   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:54.561548   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.561561   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:54.561618   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:54.591345   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.591357   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:54.591412   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:54.640864   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.640879   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:54.640945   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:54.671790   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.671803   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:54.671857   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:54.706884   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.706895   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:54.706903   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:54.706911   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:54.332276   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:10:54.332375   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.349325   12940 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531110349-2169" to be "Ready" ...
	I0531 11:10:54.369115   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.369155   12940 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 11:10:54.369164   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:10:54.369272   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.388239   12940 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:10:54.388268   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:10:54.388356   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.398369   12940 node_ready.go:49] node "no-preload-20220531110349-2169" has status "Ready":"True"
	I0531 11:10:54.398390   12940 node_ready.go:38] duration metric: took 29.340031ms waiting for node "no-preload-20220531110349-2169" to be "Ready" ...
	I0531 11:10:54.398402   12940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:10:54.407762   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-kr94r" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:54.485063   12940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51693 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531110349-2169/id_rsa Username:docker}
	I0531 11:10:54.486446   12940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51693 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531110349-2169/id_rsa Username:docker}
	I0531 11:10:54.493005   12940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51693 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531110349-2169/id_rsa Username:docker}
	I0531 11:10:54.496616   12940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51693 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531110349-2169/id_rsa Username:docker}
	I0531 11:10:54.609442   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:10:54.614198   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:10:54.614214   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:10:54.627024   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:10:54.699179   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:10:54.699194   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:10:54.706453   12940 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:10:54.706467   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:10:54.715875   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:10:54.715890   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:10:54.737484   12940 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:10:54.737497   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:10:54.795544   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:10:54.795557   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:10:54.805689   12940 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:10:54.805703   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:10:54.907238   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:10:54.907256   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:10:54.917512   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:10:54.936251   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:10:54.936264   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:10:55.030873   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:10:55.030896   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:10:55.125262   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:10:55.125279   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:10:55.210483   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:10:55.210498   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:10:55.223830   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.19376962s)
	I0531 11:10:55.223855   12940 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0531 11:10:55.297323   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:10:55.420835   12940 pod_ready.go:97] error getting pod "coredns-64897985d-kr94r" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kr94r" not found
	I0531 11:10:55.420856   12940 pod_ready.go:81] duration metric: took 1.013083538s waiting for pod "coredns-64897985d-kr94r" in "kube-system" namespace to be "Ready" ...
	E0531 11:10:55.420871   12940 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-kr94r" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kr94r" not found
	I0531 11:10:55.420884   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-r9cpx" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:55.444508   12940 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531110349-2169"
	I0531 11:10:56.168362   12940 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 11:10:56.205410   12940 addons.go:417] enableAddons completed in 2.279750293s
	I0531 11:10:57.431596   12940 pod_ready.go:102] pod "coredns-64897985d-r9cpx" in "kube-system" namespace has status "Ready":"False"
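The addon machinery above follows one pattern per manifest: the YAML is written over SSH into /etc/kubernetes/addons ("scp memory --> ..."), then each addon group is applied with a single kubectl apply. The "not found" error for coredns-64897985d-kr94r is expected noise: the coredns deployment was rescaled from 2 to 1 at 11:10:53, so the waiter skips the deleted replica and moves on to the surviving one. The apply pattern, verbatim in shape from the log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.23.6/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml
    # dashboard and metrics-server each apply their whole manifest set
    # in one command with multiple -f flags, as logged at 11:10:54-55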
	I0531 11:10:56.760836   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053934934s)
	I0531 11:10:56.760940   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:56.760946   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:56.799437   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:56.799452   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:56.813095   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:56.813109   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:56.865931   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:56.865942   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:56.865949   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:59.378503   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:59.856449   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:58.933788   12940 pod_ready.go:92] pod "coredns-64897985d-r9cpx" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.933803   12940 pod_ready.go:81] duration metric: took 3.51295192s waiting for pod "coredns-64897985d-r9cpx" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.933810   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.941613   12940 pod_ready.go:92] pod "etcd-no-preload-20220531110349-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.941624   12940 pod_ready.go:81] duration metric: took 7.809186ms waiting for pod "etcd-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.941635   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.950180   12940 pod_ready.go:92] pod "kube-apiserver-no-preload-20220531110349-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.950192   12940 pod_ready.go:81] duration metric: took 8.550026ms waiting for pod "kube-apiserver-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.950198   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.956039   12940 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220531110349-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.956051   12940 pod_ready.go:81] duration metric: took 5.847589ms waiting for pod "kube-controller-manager-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.956058   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pcc2" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.962654   12940 pod_ready.go:92] pod "kube-proxy-2pcc2" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.962667   12940 pod_ready.go:81] duration metric: took 6.602768ms waiting for pod "kube-proxy-2pcc2" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.962673   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:59.329338   12940 pod_ready.go:92] pod "kube-scheduler-no-preload-20220531110349-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:59.329348   12940 pod_ready.go:81] duration metric: took 366.673929ms waiting for pod "kube-scheduler-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:59.329353   12940 pod_ready.go:38] duration metric: took 4.930989595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:10:59.329370   12940 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:10:59.329422   12940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:59.340810   12940 api_server.go:71] duration metric: took 5.415207061s to wait for apiserver process to appear ...
	I0531 11:10:59.340824   12940 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:10:59.340831   12940 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51697/healthz ...
	I0531 11:10:59.346099   12940 api_server.go:266] https://127.0.0.1:51697/healthz returned 200:
	ok
	I0531 11:10:59.347221   12940 api_server.go:140] control plane version: v1.23.6
	I0531 11:10:59.347230   12940 api_server.go:130] duration metric: took 6.40122ms to wait for apiserver health ...
	I0531 11:10:59.347235   12940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:10:59.532996   12940 system_pods.go:59] 8 kube-system pods found
	I0531 11:10:59.533010   12940 system_pods.go:61] "coredns-64897985d-r9cpx" [fb5cf9cb-7184-4170-934e-1d7cfe1d690e] Running
	I0531 11:10:59.533014   12940 system_pods.go:61] "etcd-no-preload-20220531110349-2169" [ac9f3123-c82c-4739-8910-0d1b91f259b9] Running
	I0531 11:10:59.533019   12940 system_pods.go:61] "kube-apiserver-no-preload-20220531110349-2169" [db4029ec-5ce6-4188-a6b8-048f56eafdaf] Running
	I0531 11:10:59.533023   12940 system_pods.go:61] "kube-controller-manager-no-preload-20220531110349-2169" [2f0ae197-3c23-4ee4-a342-221494979b29] Running
	I0531 11:10:59.533028   12940 system_pods.go:61] "kube-proxy-2pcc2" [b0618709-0f72-4a65-9379-8838a18e826c] Running
	I0531 11:10:59.533033   12940 system_pods.go:61] "kube-scheduler-no-preload-20220531110349-2169" [043aa359-d83f-4904-ad1d-8cc3ce571c62] Running
	I0531 11:10:59.533038   12940 system_pods.go:61] "metrics-server-b955d9d8-xd4wv" [27025e59-a89a-49b0-b7ab-6d9daab5c880] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:10:59.533043   12940 system_pods.go:61] "storage-provisioner" [6f6a080a-3fc2-4d79-b754-6d309648dcd3] Running
	I0531 11:10:59.533047   12940 system_pods.go:74] duration metric: took 185.810453ms to wait for pod list to return data ...
	I0531 11:10:59.533052   12940 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:10:59.729532   12940 default_sa.go:45] found service account: "default"
	I0531 11:10:59.729543   12940 default_sa.go:55] duration metric: took 196.490372ms for default service account to be created ...
	I0531 11:10:59.729549   12940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 11:10:59.933433   12940 system_pods.go:86] 8 kube-system pods found
	I0531 11:10:59.933450   12940 system_pods.go:89] "coredns-64897985d-r9cpx" [fb5cf9cb-7184-4170-934e-1d7cfe1d690e] Running
	I0531 11:10:59.933455   12940 system_pods.go:89] "etcd-no-preload-20220531110349-2169" [ac9f3123-c82c-4739-8910-0d1b91f259b9] Running
	I0531 11:10:59.933459   12940 system_pods.go:89] "kube-apiserver-no-preload-20220531110349-2169" [db4029ec-5ce6-4188-a6b8-048f56eafdaf] Running
	I0531 11:10:59.933463   12940 system_pods.go:89] "kube-controller-manager-no-preload-20220531110349-2169" [2f0ae197-3c23-4ee4-a342-221494979b29] Running
	I0531 11:10:59.933466   12940 system_pods.go:89] "kube-proxy-2pcc2" [b0618709-0f72-4a65-9379-8838a18e826c] Running
	I0531 11:10:59.933470   12940 system_pods.go:89] "kube-scheduler-no-preload-20220531110349-2169" [043aa359-d83f-4904-ad1d-8cc3ce571c62] Running
	I0531 11:10:59.933475   12940 system_pods.go:89] "metrics-server-b955d9d8-xd4wv" [27025e59-a89a-49b0-b7ab-6d9daab5c880] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:10:59.933480   12940 system_pods.go:89] "storage-provisioner" [6f6a080a-3fc2-4d79-b754-6d309648dcd3] Running
	I0531 11:10:59.933485   12940 system_pods.go:126] duration metric: took 203.935603ms to wait for k8s-apps to be running ...
	I0531 11:10:59.933490   12940 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 11:10:59.933540   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:10:59.944445   12940 system_svc.go:56] duration metric: took 10.951036ms WaitForService to wait for kubelet.
	I0531 11:10:59.944462   12940 kubeadm.go:572] duration metric: took 6.018868929s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 11:10:59.944478   12940 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:11:00.129933   12940 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:11:00.129945   12940 node_conditions.go:123] node cpu capacity is 6
	I0531 11:11:00.129954   12940 node_conditions.go:105] duration metric: took 185.474872ms to run NodePressure ...
	I0531 11:11:00.129963   12940 start.go:213] waiting for startup goroutines ...
	I0531 11:11:00.161044   12940 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:11:00.181322   12940 out.go:177] * Done! kubectl is now configured to use "no-preload-20220531110349-2169" cluster and "default" namespace by default
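With all system-critical pods Ready, the 12940 run closes out its health gate: the apiserver /healthz endpoint on the host-forwarded port (51697 in this run) returns 200/ok, the kube-system pod list and default service account are verified, node pressure conditions are checked, and the gap between kubectl 1.24.0 and the 1.23.6 cluster is within kubectl's supported one-minor-version skew, so only an informational note is logged. minikube performs the healthz probe in-process over HTTPS; a curl equivalent is shown purely as an illustration:

    # illustrative only: minikube checks this endpoint itself
    curl -sk https://127.0.0.1:51697/healthz
    # expected body: ok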
	I0531 11:10:59.886711   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.886723   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:59.886777   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:59.917269   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.917283   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:59.917349   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:59.953208   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.953222   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:59.953295   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:59.985163   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.985175   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:59.985230   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:00.019546   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.019559   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:00.019619   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:00.048681   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.048694   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:00.048750   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:00.080858   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.080875   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:00.080942   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:00.116240   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.116252   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:00.116258   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:00.116267   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:00.129973   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:00.129986   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:00.191716   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:00.191728   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:00.191748   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:00.207100   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:00.207112   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:02.269342   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062241719s)
	I0531 11:11:02.269451   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:02.269458   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:04.814644   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:04.855355   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:04.899388   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.899403   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:04.899460   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:04.931294   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.931308   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:04.931372   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:04.966850   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.966868   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:04.966930   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:05.006753   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.006766   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:05.006825   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:05.035514   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.035528   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:05.035581   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:05.071606   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.071618   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:05.071679   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:05.113543   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.113558   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:05.113622   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:05.158389   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.158403   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:05.158412   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:05.158420   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:05.209536   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:05.209555   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:05.226226   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:05.226244   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:05.293642   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:05.293653   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:05.293661   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:05.314581   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:05.314597   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:07.372712   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058122008s)
	I0531 11:11:09.873521   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:10.356773   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:10.386073   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.386085   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:10.386139   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:10.415320   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.415332   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:10.415399   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:10.444338   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.444352   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:10.444410   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:10.472812   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.472823   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:10.472880   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:10.500902   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.500914   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:10.500971   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:10.530609   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.530621   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:10.530672   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:10.561973   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.561987   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:10.562047   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:10.591600   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.591611   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:10.591618   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:10.591625   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:10.648762   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:10.648773   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:10.648779   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:10.660930   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:10.660942   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:12.715163   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054233595s)
	I0531 11:11:12.715268   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:12.715274   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:12.757025   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:12.757041   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:15.269700   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:15.355475   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:15.385163   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.385180   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:15.385236   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:15.417139   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.417153   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:15.417210   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:15.447785   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.447798   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:15.447864   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:15.476820   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.476832   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:15.476893   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:15.506443   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.506459   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:15.506517   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:15.535403   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.535422   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:15.535490   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:15.563398   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.563411   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:15.563468   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:15.592213   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.592225   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:15.592238   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:15.592245   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:15.631327   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:15.631342   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:15.642726   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:15.642740   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:15.694280   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:15.694292   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:15.694300   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:15.706180   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:15.706192   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:17.759941   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053759496s)
	I0531 11:11:20.260199   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:20.357081   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:20.391524   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.391536   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:20.391588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:20.420970   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.420982   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:20.421037   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:20.452134   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.452148   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:20.452206   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:20.483165   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.483176   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:20.483217   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:20.512821   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.512834   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:20.512892   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:20.543804   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.543816   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:20.543877   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:20.575838   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.575850   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:20.575908   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:20.607187   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.607200   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:20.607206   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:20.607214   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:20.620268   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:20.620287   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:20.683805   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:20.683818   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:20.683825   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:20.696565   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:20.696583   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:22.757052   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060481864s)
	I0531 11:11:22.757167   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:22.757175   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:25.296888   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:25.356633   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:25.388153   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.388166   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:25.388229   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:25.417984   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.417997   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:25.418052   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:25.447364   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.447376   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:25.447432   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:25.475704   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.475718   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:25.475772   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:25.504817   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.504830   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:25.504882   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:25.534188   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.534200   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:25.534255   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:25.562856   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.562868   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:25.562922   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:25.592490   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.592503   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:25.592509   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:25.592517   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:25.604749   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:25.604762   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:25.657748   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:25.657758   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:25.657765   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:25.669778   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:25.669790   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:27.727458   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057680964s)
	I0531 11:11:27.727570   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:27.727577   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:30.268792   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:30.355702   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:30.385351   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.385362   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:30.385416   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:30.416692   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.416704   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:30.416756   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:30.446080   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.446092   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:30.446148   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:30.475837   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.475850   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:30.475904   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:30.505855   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.505866   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:30.505919   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:30.534660   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.534673   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:30.534735   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:30.563972   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.563985   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:30.564039   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:30.593062   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.593075   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:30.593082   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:30.593089   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:30.604860   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:30.604873   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:30.657067   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:30.657079   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:30.657087   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:30.669385   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:30.669397   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:32.725632   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056248231s)
	I0531 11:11:32.725738   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:32.725745   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:35.265482   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:35.356955   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:35.388680   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.388693   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:35.388746   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:35.418234   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.418247   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:35.418306   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:35.448424   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.448436   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:35.448488   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:35.477114   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.477126   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:35.477183   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:35.507149   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.507160   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:35.507222   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:35.536636   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.536648   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:35.536706   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:35.566077   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.566089   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:35.566147   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:35.596667   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.596680   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:35.596686   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:35.596693   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:37.649220   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052538655s)
	I0531 11:11:37.649329   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:37.649337   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:37.690050   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:37.690063   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:37.701532   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:37.701545   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:37.754370   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:37.754382   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:37.754389   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:40.266957   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:40.356874   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:40.387551   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.387563   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:40.387617   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:40.416687   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.416699   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:40.416751   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:40.446274   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.446288   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:40.446341   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:40.477123   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.477138   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:40.477196   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:40.507689   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.507702   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:40.507752   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:40.538333   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.538346   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:40.538398   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:40.568456   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.568468   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:40.568524   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:40.598870   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.598883   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:40.598891   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:40.598898   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:40.637605   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:40.637623   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:40.650027   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:40.650045   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:40.702714   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:40.702727   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:40.702734   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:40.715145   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:40.715158   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:42.769567   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054421687s)
	I0531 11:11:45.271767   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:45.354742   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:45.384335   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.384348   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:45.384402   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:45.415481   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.415493   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:45.415567   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:45.444878   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.444892   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:45.444964   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:45.474544   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.474557   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:45.474616   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:45.504114   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.504126   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:45.504184   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:45.532825   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.532838   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:45.532893   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:45.561687   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.561699   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:45.561752   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:45.592123   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.592136   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:45.592143   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:45.592149   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:45.631894   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:45.631908   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:45.643759   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:45.643771   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:45.743249   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:45.743266   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:45.743273   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:45.755246   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:45.755258   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:47.813698   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058453882s)
	I0531 11:11:50.316034   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:50.355463   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:50.385123   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.385136   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:50.385190   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:50.414943   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.414957   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:50.415012   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:50.443429   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.443441   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:50.443498   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:50.472680   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.472693   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:50.472747   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:50.501429   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.501443   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:50.501501   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:50.531478   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.531489   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:50.531545   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:50.563245   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.563259   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:50.563317   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:50.593840   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.593852   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:50.593858   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:50.593865   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:50.661648   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:50.661658   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:50.661667   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:50.673634   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:50.673646   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:52.731947   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058312875s)
	I0531 11:11:52.732053   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:52.732060   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:52.771014   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:52.771030   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
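	(The cycle above repeats roughly every five seconds: probe for each expected control-plane container by Docker name filter, warn when none is found, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal standalone sketch of that probe pattern in Go — illustrative only, not minikube's actual implementation; the helper name listContainers is invented:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the `docker ps -a --filter=name=... --format={{.ID}}`
	// calls in the log: it returns the IDs of all containers, running or exited,
	// whose name matches the given filter.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
			ids, err := listContainers(name)
			if err != nil {
				fmt.Println("probe failed:", err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
	})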
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:05:04 UTC, end at Tue 2022-05-31 18:11:57 UTC. --
	May 31 18:10:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:10.042248991Z" level=info msg="ignoring event" container=376f29d45597bccdec0a4dda410e4aeb172b032ea6797f0e593817068a5013da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:10.192683892Z" level=info msg="ignoring event" container=b871c481e673c56c6f6617aa319f7022a5ac2ebc9f02cc9310ac53d1bc851f06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:10.371370417Z" level=info msg="ignoring event" container=153a5bb7e2e588f9df43a934f88f5d10fb0fe6b51553f1ce5655c03803cccbc9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:20 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:20.438879476Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=3dc2f2ab920549d3659cd289cc7b2b744cea982221569ae9b2922a0a55a6c231
	May 31 18:10:20 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:20.490848518Z" level=info msg="ignoring event" container=3dc2f2ab920549d3659cd289cc7b2b744cea982221569ae9b2922a0a55a6c231 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:20 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:20.597940997Z" level=info msg="ignoring event" container=58ff821a53d622f757e07fe49f300947225450a8facc7f294d17806076be3822 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:30 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:30.685124723Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=2fba2b7716ebe019445341ca7b305b76cfb828936fabd67fbb3bc70dafa9c890
	May 31 18:10:30 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:30.713579858Z" level=info msg="ignoring event" container=2fba2b7716ebe019445341ca7b305b76cfb828936fabd67fbb3bc70dafa9c890 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:30 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:30.809281678Z" level=info msg="ignoring event" container=e367253bc9d8450f64e1201b74dcb7fd8bc245ccb1dcc60ef48afb8962e06ebd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:30 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:30.904398601Z" level=info msg="ignoring event" container=b723cc39a68f2ad735a46c02d42d51c7881cd89ad90614d82456531237ade7ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:31 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:31.002169613Z" level=info msg="ignoring event" container=a47e12341a57060153a2395fdf05b58b95b6984bc76b0e307a1c90335f529d7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:31 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:31.114366560Z" level=info msg="ignoring event" container=ddb9cd08be659b41200e32869d8ddba40e31ddbbaa6b6718a4b5dcc51f98fefa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:53 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:53.942454999Z" level=info msg="ignoring event" container=325a0b8dc79d2f1e8211b46d72f7fe6467fc4b7020e773670ca57a1ed8f2dc49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:56 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:56.539915388Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:10:56 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:56.540036016Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:10:56 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:56.540981076Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:10:57 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:57.666351328Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:10:57 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:57.968119280Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:11:01 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:01.473186420Z" level=info msg="ignoring event" container=7f16f785e81b1f95fdddb27b7905bf1a7797715467a0e44049a08980b139ddf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:11:01 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:01.504073470Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 18:11:02 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:02.393454009Z" level=info msg="ignoring event" container=0a40a1e4f58ff48b015ce55776c73610959d80a2431f9616c1bb2bc3171bd464 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:11:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:10.747583874Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:10.747660580Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:10.748838030Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:17 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:17.825933841Z" level=info msg="ignoring event" container=f5148b430d0890a100388c0ebd7884924fc7647a7f0ed7dd8f9ac178f95784db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	f5148b430d089       a90209bb39e3d                                                                                    40 seconds ago       Exited              dashboard-metrics-scraper   2                   0c59515a76f58
	f36eb7f887ca6       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   50 seconds ago       Running             kubernetes-dashboard        0                   819a50ab0622d
	2b584a0d4078f       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   8dc157329ae9e
	1e71e41ab1319       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   2fead9389c6ab
	020ee73ec6b03       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   8fa06b665c442
	f38e27da8f8f8       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   ae4c2a351166c
	92491c72e5133       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   842d919e4a2cb
	745b99d504e4d       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   fd31a20ac4384
	9a2a1f0c21c52       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   b4a24a517ab62
	
	* 
	* ==> coredns [1e71e41ab131] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220531110349-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220531110349-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=no-preload-20220531110349-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T11_10_40_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:10:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220531110349-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:11:55 +0000   Tue, 31 May 2022 18:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:11:55 +0000   Tue, 31 May 2022 18:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:11:55 +0000   Tue, 31 May 2022 18:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:11:55 +0000   Tue, 31 May 2022 18:11:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    no-preload-20220531110349-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                d5e74baf-ef0e-467f-9551-8b0c3a613a0f
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-r9cpx                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-no-preload-20220531110349-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kube-apiserver-no-preload-20220531110349-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-no-preload-20220531110349-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-2pcc2                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-no-preload-20220531110349-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 metrics-server-b955d9d8-xd4wv                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-2s5p9                0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-wzj5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 63s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    83s (x4 over 83s)  kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x3 over 83s)  kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  83s (x4 over 83s)  kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientMemory
	  Normal  Starting                 77s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s                kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s                kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                77s                kubelet     Node no-preload-20220531110349-2169 status is now: NodeReady
	  Normal  Starting                 2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s (x2 over 2s)    kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s (x2 over 2s)    kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s (x2 over 2s)    kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet     Node no-preload-20220531110349-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet     Node no-preload-20220531110349-2169 status is now: NodeReady
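	(The node conditions above — MemoryPressure/DiskPressure/PIDPressure False, Ready True — are what the retry loop is ultimately waiting on, and they can be read programmatically instead of shelling out to kubectl. A hedged client-go sketch, assuming the kubeconfig path and node name shown in this log; not part of the test suite:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used by the in-VM kubectl calls in this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(),
			"no-preload-20220531110349-2169", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Print the same condition table that `kubectl describe nodes` renders.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %s  %s\n", c.Type, c.Status, c.Reason)
		}
	})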
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [f38e27da8f8f] <==
	* {"level":"info","ts":"2022-05-31T18:10:35.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T18:10:35.360Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:no-preload-20220531110349-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:10:35.456Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:10:35.456Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:10:35.457Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:10:35.457Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:10:35.457Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:10:35.457Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:10:35.458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:10:35.458Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:10:35.458Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:10:35.458Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  18:11:58 up 59 min,  0 users,  load average: 0.86, 0.99, 1.15
	Linux no-preload-20220531110349-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [92491c72e513] <==
	* I0531 18:10:38.929955       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:10:38.955628       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:10:39.002940       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 18:10:39.006751       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 18:10:39.007503       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:10:39.010431       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:10:39.795743       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:10:40.398623       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:10:40.405328       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:10:40.412657       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:10:40.577882       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:10:52.930769       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:10:53.482723       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:10:54.426614       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:10:55.440474       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.100.110.30]
	I0531 18:10:56.113798       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.101.217.54]
	I0531 18:10:56.125077       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.99.244.19]
	W0531 18:10:56.339122       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:10:56.339262       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:10:56.339302       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:11:56.295540       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:11:56.295639       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:11:56.295645       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
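The recurring 503s for v1beta1.metrics.k8s.io mean the aggregated API backed by metrics-server never became available, which matches the non-running metrics-server pod reported elsewhere in this test. A quick check of the APIService's condition, under the same context assumption:

    kubectl --context no-preload-20220531110349-2169 get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'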
	
	* 
	* ==> kube-controller-manager [9a2a1f0c21c5] <==
	* I0531 18:10:55.965232       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0531 18:10:55.970000       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:10:55.970298       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:10:55.970333       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:10:56.001203       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:10:56.003530       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:10:56.003569       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:10:56.007428       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:10:56.007468       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:10:56.010275       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:10:56.010309       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:10:56.020851       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-wzj5d"
	I0531 18:10:56.027119       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-2s5p9"
	E0531 18:11:54.676630       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0531 18:11:54.676764       1 event.go:294] "Event occurred" object="no-preload-20220531110349-2169" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node no-preload-20220531110349-2169 status is now: NodeNotReady"
	I0531 18:11:54.680111       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77-wzj5d" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0531 18:11:54.683136       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0531 18:11:54.686027       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-no-preload-20220531110349-2169" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.690833       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-2pcc2" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.695333       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d-r9cpx" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.758312       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.763397       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-no-preload-20220531110349-2169" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.768382       1 event.go:294] "Event occurred" object="kube-system/etcd-no-preload-20220531110349-2169" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.773870       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0531 18:11:54.773911       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-no-preload-20220531110349-2169" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
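This cascade is the controller-manager reacting to the kubelet restart seen in the node events: every pod on the node is flagged NodeNotReady, and with a single node the controller enters master disruption mode until the node reports Ready again. The node's live Ready condition can be read back like so (same assumed context):

    kubectl --context no-preload-20220531110349-2169 get node no-preload-20220531110349-2169 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'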
	
	* 
	* ==> kube-proxy [020ee73ec6b0] <==
	* I0531 18:10:54.240354       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:10:54.240420       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:10:54.240467       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:10:54.422152       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:10:54.422182       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:10:54.422191       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:10:54.422217       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:10:54.423862       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:10:54.424818       1 config.go:317] "Starting service config controller"
	I0531 18:10:54.424843       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:10:54.424868       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:10:54.424873       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:10:54.525811       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:10:54.525850       1 shared_informer.go:247] Caches are synced for service config 
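kube-proxy fell back to iptables because no proxy mode was configured, and it builds a dual-stack proxier with IPv6 local-traffic detection disabled since no IPv6 cluster CIDR is defined. The effective configuration can be read back from the cluster; the ConfigMap name and key below are the kubeadm defaults, which minikube is assumed to follow here:

    kubectl --context no-preload-20220531110349-2169 -n kube-system \
      get configmap kube-proxy -o jsonpath='{.data.config\.conf}'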
	
	* 
	* ==> kube-scheduler [745b99d504e4] <==
	* W0531 18:10:37.732605       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:10:37.732621       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:10:37.731711       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:10:37.732747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:10:37.731700       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:10:37.732759       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:10:37.732786       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:10:37.732959       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:10:37.732818       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:10:37.733023       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:10:37.733496       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:10:37.733526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:10:38.644695       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:10:38.644732       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:10:38.651685       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:10:38.651718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:10:38.683758       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:10:38.683794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:10:38.691804       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:10:38.691855       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:10:38.725939       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:10:38.725975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:10:38.769317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:10:38.769363       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0531 18:10:39.127565       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
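"Forbidden" storms like the one above are a routine startup race: the scheduler's informers begin listing before the apiserver has finished installing the default RBAC bindings, and the closing cache-sync line shows it recovered. The grant can be verified after startup with an impersonated access check (same assumed context):

    kubectl --context no-preload-20220531110349-2169 auth can-i list nodes \
      --as=system:kube-scheduler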
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:05:04 UTC, end at Tue 2022-05-31 18:11:58 UTC. --
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.277514    7321 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.277564    7321 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.277592    7321 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.277650    7321 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.277681    7321 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.304973    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/27025e59-a89a-49b0-b7ab-6d9daab5c880-tmp-dir\") pod \"metrics-server-b955d9d8-xd4wv\" (UID: \"27025e59-a89a-49b0-b7ab-6d9daab5c880\") " pod="kube-system/metrics-server-b955d9d8-xd4wv"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305022    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2d6a943-79eb-4c29-a9d5-6ab70b33fa42-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-2s5p9\" (UID: \"a2d6a943-79eb-4c29-a9d5-6ab70b33fa42\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-2s5p9"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305040    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6f6a080a-3fc2-4d79-b754-6d309648dcd3-tmp\") pod \"storage-provisioner\" (UID: \"6f6a080a-3fc2-4d79-b754-6d309648dcd3\") " pod="kube-system/storage-provisioner"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305059    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9kvq\" (UniqueName: \"kubernetes.io/projected/fb5cf9cb-7184-4170-934e-1d7cfe1d690e-kube-api-access-v9kvq\") pod \"coredns-64897985d-r9cpx\" (UID: \"fb5cf9cb-7184-4170-934e-1d7cfe1d690e\") " pod="kube-system/coredns-64897985d-r9cpx"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305075    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0618709-0f72-4a65-9379-8838a18e826c-kube-proxy\") pod \"kube-proxy-2pcc2\" (UID: \"b0618709-0f72-4a65-9379-8838a18e826c\") " pod="kube-system/kube-proxy-2pcc2"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305088    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0618709-0f72-4a65-9379-8838a18e826c-lib-modules\") pod \"kube-proxy-2pcc2\" (UID: \"b0618709-0f72-4a65-9379-8838a18e826c\") " pod="kube-system/kube-proxy-2pcc2"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305102    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl7q2\" (UniqueName: \"kubernetes.io/projected/6f6a080a-3fc2-4d79-b754-6d309648dcd3-kube-api-access-nl7q2\") pod \"storage-provisioner\" (UID: \"6f6a080a-3fc2-4d79-b754-6d309648dcd3\") " pod="kube-system/storage-provisioner"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305117    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2qt\" (UniqueName: \"kubernetes.io/projected/27025e59-a89a-49b0-b7ab-6d9daab5c880-kube-api-access-4q2qt\") pod \"metrics-server-b955d9d8-xd4wv\" (UID: \"27025e59-a89a-49b0-b7ab-6d9daab5c880\") " pod="kube-system/metrics-server-b955d9d8-xd4wv"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305134    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgzb4\" (UniqueName: \"kubernetes.io/projected/1e85297b-8675-49ff-bed8-a051aa621a28-kube-api-access-cgzb4\") pod \"kubernetes-dashboard-8469778f77-wzj5d\" (UID: \"1e85297b-8675-49ff-bed8-a051aa621a28\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-wzj5d"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305147    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0618709-0f72-4a65-9379-8838a18e826c-xtables-lock\") pod \"kube-proxy-2pcc2\" (UID: \"b0618709-0f72-4a65-9379-8838a18e826c\") " pod="kube-system/kube-proxy-2pcc2"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305167    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mnjh\" (UniqueName: \"kubernetes.io/projected/b0618709-0f72-4a65-9379-8838a18e826c-kube-api-access-8mnjh\") pod \"kube-proxy-2pcc2\" (UID: \"b0618709-0f72-4a65-9379-8838a18e826c\") " pod="kube-system/kube-proxy-2pcc2"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305183    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e85297b-8675-49ff-bed8-a051aa621a28-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-wzj5d\" (UID: \"1e85297b-8675-49ff-bed8-a051aa621a28\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-wzj5d"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305196    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cf9cb-7184-4170-934e-1d7cfe1d690e-config-volume\") pod \"coredns-64897985d-r9cpx\" (UID: \"fb5cf9cb-7184-4170-934e-1d7cfe1d690e\") " pod="kube-system/coredns-64897985d-r9cpx"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305210    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtsvl\" (UniqueName: \"kubernetes.io/projected/a2d6a943-79eb-4c29-a9d5-6ab70b33fa42-kube-api-access-rtsvl\") pod \"dashboard-metrics-scraper-56974995fc-2s5p9\" (UID: \"a2d6a943-79eb-4c29-a9d5-6ab70b33fa42\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-2s5p9"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305218    7321 reconciler.go:157] "Reconciler: start to sync state"
	May 31 18:11:57 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:57.474147    7321 request.go:665] Waited for 1.174425753s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 31 18:11:57 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:57.550065    7321 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220531110349-2169\" already exists" pod="kube-system/kube-scheduler-no-preload-20220531110349-2169"
	May 31 18:11:57 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:57.723189    7321 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220531110349-2169\" already exists" pod="kube-system/kube-apiserver-no-preload-20220531110349-2169"
	May 31 18:11:57 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:57.936688    7321 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220531110349-2169\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220531110349-2169"
	May 31 18:11:58 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:58.077954    7321 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220531110349-2169\" already exists" pod="kube-system/etcd-no-preload-20220531110349-2169"
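The "already exists" errors are expected after a kubelet restart: the static control-plane pods already have mirror pods in the API from the previous kubelet instance, so re-creating them fails harmlessly. Mirror pods can be recognized by their annotations, e.g. for the etcd pod named in the log (dumping all annotations avoids JSONPath escaping; look for kubernetes.io/config.mirror):

    kubectl --context no-preload-20220531110349-2169 -n kube-system \
      get pod etcd-no-preload-20220531110349-2169 -o jsonpath='{.metadata.annotations}'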
	
	* 
	* ==> kubernetes-dashboard [f36eb7f887ca] <==
	* 2022/05/31 18:11:08 Using namespace: kubernetes-dashboard
	2022/05/31 18:11:08 Using in-cluster config to connect to apiserver
	2022/05/31 18:11:08 Using secret token for csrf signing
	2022/05/31 18:11:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 18:11:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 18:11:08 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 18:11:08 Generating JWE encryption key
	2022/05/31 18:11:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 18:11:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 18:11:08 Initializing JWE encryption key from synchronized object
	2022/05/31 18:11:08 Creating in-cluster Sidecar client
	2022/05/31 18:11:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 18:11:08 Serving insecurely on HTTP port: 9090
	2022/05/31 18:11:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 18:11:08 Starting overwatch
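The 18:11:08 "Starting overwatch" line at the end is the container's first message, appearing out of order in the capture. The dashboard itself is healthy and serving on port 9090 in-cluster; only its metrics sidecar's health checks fail, the same metrics-server outage seen in the apiserver log. A sketch for reaching the UI manually, assuming the deployment name shown in the controller-manager events:

    kubectl --context no-preload-20220531110349-2169 -n kubernetes-dashboard \
      port-forward deployment/kubernetes-dashboard 9090:9090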
	
	* 
	* ==> storage-provisioner [2b584a0d4078] <==
	* I0531 18:10:56.301581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:10:56.309072       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:10:56.309101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:10:56.314323       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:10:56.314428       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff3eac1c-c091-4429-8906-238a6d563305", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220531110349-2169_55feb2b2-02d3-40bc-8de3-68e7a342e851 became leader
	I0531 18:10:56.314455       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220531110349-2169_55feb2b2-02d3-40bc-8de3-68e7a342e851!
	I0531 18:10:56.415576       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220531110349-2169_55feb2b2-02d3-40bc-8de3-68e7a342e851!
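The storage-provisioner runs standard leader election against an Endpoints lock before starting its controller. The lock object named in the event above can be inspected directly (same assumed context):

    kubectl --context no-preload-20220531110349-2169 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml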
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220531110349-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-xd4wv
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220531110349-2169 describe pod metrics-server-b955d9d8-xd4wv
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220531110349-2169 describe pod metrics-server-b955d9d8-xd4wv: exit status 1 (295.159285ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-xd4wv" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220531110349-2169 describe pod metrics-server-b955d9d8-xd4wv: exit status 1
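The NotFound above is a namespacing mismatch rather than necessarily a missing pod: the listing ran with -A across all namespaces, but the follow-up describe ran without -n, so it searched only the default namespace, while the kubelet log places metrics-server-b955d9d8-xd4wv in kube-system. A one-shot listing that keeps the namespace attached avoids the second call entirely (a sketch, not what helpers_test.go does):

    kubectl --context no-preload-20220531110349-2169 get po -A \
      --field-selector=status.phase!=Running \
      -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase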
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220531110349-2169
helpers_test.go:235: (dbg) docker inspect no-preload-20220531110349-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a",
	        "Created": "2022-05-31T18:03:51.07276327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:05:03.886070335Z",
	            "FinishedAt": "2022-05-31T18:05:01.990570695Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a/hosts",
	        "LogPath": "/var/lib/docker/containers/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a/c49110de8401818023bbfc97a052c9eb5c797d3848fab88f59f8cc128a6f799a-json.log",
	        "Name": "/no-preload-20220531110349-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220531110349-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220531110349-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/63109090f6b4c35e5687da31ee7ce532cddaf41d21b05a6df8ae11c3486fe9fe-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63109090f6b4c35e5687da31ee7ce532cddaf41d21b05a6df8ae11c3486fe9fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63109090f6b4c35e5687da31ee7ce532cddaf41d21b05a6df8ae11c3486fe9fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63109090f6b4c35e5687da31ee7ce532cddaf41d21b05a6df8ae11c3486fe9fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220531110349-2169",
	                "Source": "/var/lib/docker/volumes/no-preload-20220531110349-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220531110349-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220531110349-2169",
	                "name.minikube.sigs.k8s.io": "no-preload-20220531110349-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f416093250515b12442e22c72d9a1a37327425dccabe2432ce68e9a32a4bb19",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51693"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51694"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51695"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51696"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51697"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4f4160932505",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220531110349-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c49110de8401",
	                        "no-preload-20220531110349-2169"
	                    ],
	                    "NetworkID": "8f956b17300170310409428d6088c5b2b67174350067b9d66aeee84ee79b99e9",
	                    "EndpointID": "0953aff0bbbe269cf4ee651c4b3cfb348a6a18ea2d3f2c2a6e5bd9cd81becebf",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
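Most of the inspect payload above is noise for this failure; the fields the harness actually keys on are the container state and the host port mapped to the apiserver's 8443. A Go-template filter extracts just those (container name taken from the log):

    docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
      no-preload-20220531110349-2169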
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220531110349-2169 logs -n 25
E0531 11:12:01.666566    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220531110349-2169 logs -n 25: (3.05656017s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cilium-20220531104927-2169                     | cilium-20220531104927-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:00 PDT |
	| delete  | -p calico-20220531104927-2169                     | calico-20220531104927-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:00 PDT |
	| start   | -p bridge-20220531104925-2169                     | bridge-20220531104925-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:01 PDT |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                        |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p bridge-20220531104925-2169                     | bridge-20220531104925-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:01 PDT |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p bridge-20220531104925-2169                     | bridge-20220531104925-2169             | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:01 PDT |
	| start   | -p false-20220531104926-2169                      | false-20220531104926-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 11:00 PDT | 31 May 22 11:01 PDT |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                        |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p false-20220531104926-2169                      | false-20220531104926-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:01 PDT |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p false-20220531104926-2169                      | false-20220531104926-2169              | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	| start   | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:01 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220531104925-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:02 PDT |
	|         | enable-default-cni-20220531104925-2169            |                                        |         |                |                     |                     |
	| start   | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:02 PDT | 31 May 22 11:03 PDT |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	| ssh     | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	| start   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --memory=2200                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                        |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220531110241-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220531110241-2169    | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                        |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --memory=2200                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                        |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                        |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                        |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                        |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | logs -n 25                                        |                                        |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:08:24
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:08:24.864423   13098 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:08:24.864582   13098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:08:24.864588   13098 out.go:309] Setting ErrFile to fd 2...
	I0531 11:08:24.864592   13098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:08:24.864692   13098 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:08:24.864985   13098 out.go:303] Setting JSON to false
	I0531 11:08:24.879863   13098 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4073,"bootTime":1654016431,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:08:24.879988   13098 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:08:24.902035   13098 out.go:177] * [old-k8s-version-20220531110241-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:08:24.945014   13098 notify.go:193] Checking for updates...
	I0531 11:08:24.966436   13098 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:08:24.987830   13098 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:08:25.009108   13098 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:08:25.030802   13098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:08:25.052040   13098 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:08:25.074525   13098 config.go:178] Loaded profile config "old-k8s-version-20220531110241-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 11:08:25.096505   13098 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0531 11:08:25.117749   13098 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:08:25.191594   13098 docker.go:137] docker version: linux-20.10.14
	I0531 11:08:25.191723   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:08:25.317616   13098 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:08:25.254314323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:08:25.361153   13098 out.go:177] * Using the docker driver based on existing profile
	I0531 11:08:25.382343   13098 start.go:284] selected driver: docker
	I0531 11:08:25.382377   13098 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:25.382520   13098 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:08:25.385966   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:08:25.513518   13098 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:08:25.450743823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:08:25.513684   13098 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:08:25.513704   13098 cni.go:95] Creating CNI manager for ""
	I0531 11:08:25.513712   13098 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:08:25.513726   13098 start_flags.go:306] config:
	{Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:25.535504   13098 out.go:177] * Starting control plane node old-k8s-version-20220531110241-2169 in cluster old-k8s-version-20220531110241-2169
	I0531 11:08:25.561264   13098 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:08:25.582190   13098 out.go:177] * Pulling base image ...
	I0531 11:08:25.624028   13098 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:08:25.624035   13098 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:08:25.624091   13098 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 11:08:25.624108   13098 cache.go:57] Caching tarball of preloaded images
	I0531 11:08:25.624296   13098 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:08:25.624329   13098 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
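	(Editor's aside: the preload.go lines above amount to a stat of a cached tarball whose file name encodes the Kubernetes version, container runtime, and storage driver. A minimal sketch of that existence check, assuming the cache layout shown in the log; the function and parameter names are illustrative, not minikube's actual API:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the preloaded-images tarball for a given
// Kubernetes version and container runtime is already in the local cache.
// (Illustrative helper; the file-name pattern mirrors the path in the log.)
func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
		k8sVersion, runtime)
	_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
	return err == nil
}

func main() {
	// For the run above this would resolve under .minikube in the Jenkins workspace.
	fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.16.0", "docker"))
}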
	I0531 11:08:25.625057   13098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json ...
	I0531 11:08:25.688021   13098 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:08:25.688038   13098 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:08:25.688049   13098 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:08:25.688095   13098 start.go:352] acquiring machines lock for old-k8s-version-20220531110241-2169: {Name:mkde0b1c8a03f8862b5675925132e687b92ccd7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:08:25.688173   13098 start.go:356] acquired machines lock for "old-k8s-version-20220531110241-2169" in 55.993µs
	I0531 11:08:25.688192   13098 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:08:25.688224   13098 fix.go:55] fixHost starting: 
	I0531 11:08:25.688466   13098 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:08:25.755111   13098 fix.go:103] recreateIfNeeded on old-k8s-version-20220531110241-2169: state=Stopped err=<nil>
	W0531 11:08:25.755155   13098 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:08:25.797678   13098 out.go:177] * Restarting existing docker container for "old-k8s-version-20220531110241-2169" ...
	I0531 11:08:24.128522   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:26.625231   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:25.818473   13098 cli_runner.go:164] Run: docker start old-k8s-version-20220531110241-2169
	I0531 11:08:26.192165   13098 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531110241-2169 --format={{.State.Status}}
	I0531 11:08:26.263698   13098 kic.go:416] container "old-k8s-version-20220531110241-2169" state is running.
	I0531 11:08:26.264351   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:26.337917   13098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/config.json ...
	I0531 11:08:26.338311   13098 machine.go:88] provisioning docker machine ...
	I0531 11:08:26.338340   13098 ubuntu.go:169] provisioning hostname "old-k8s-version-20220531110241-2169"
	I0531 11:08:26.338453   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.410821   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:26.411035   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:26.411048   13098 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220531110241-2169 && echo "old-k8s-version-20220531110241-2169" | sudo tee /etc/hostname
	I0531 11:08:26.530934   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220531110241-2169
	
	I0531 11:08:26.531026   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.602777   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:26.602942   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:26.602957   13098 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220531110241-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220531110241-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220531110241-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:08:26.716578   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:08:26.716599   13098 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:08:26.716617   13098 ubuntu.go:177] setting up certificates
	I0531 11:08:26.716625   13098 provision.go:83] configureAuth start
	I0531 11:08:26.716695   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:26.787003   13098 provision.go:138] copyHostCerts
	I0531 11:08:26.787080   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:08:26.787096   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:08:26.787190   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:08:26.787413   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:08:26.787423   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:08:26.787482   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:08:26.787625   13098 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:08:26.787631   13098 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:08:26.787687   13098 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:08:26.787803   13098 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220531110241-2169 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220531110241-2169]
	I0531 11:08:26.886368   13098 provision.go:172] copyRemoteCerts
	I0531 11:08:26.886424   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:08:26.886475   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:26.957750   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.039830   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0531 11:08:27.059791   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:08:27.076499   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:08:27.095218   13098 provision.go:86] duration metric: configureAuth took 378.579892ms
	I0531 11:08:27.095231   13098 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:08:27.095385   13098 config.go:178] Loaded profile config "old-k8s-version-20220531110241-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 11:08:27.095451   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.165741   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.165895   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.165906   13098 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:08:27.275339   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:08:27.275354   13098 ubuntu.go:71] root file system type: overlay
	I0531 11:08:27.275532   13098 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:08:27.275598   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.345524   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.345724   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.345774   13098 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:08:27.466818   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:08:27.466905   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.537313   13098 main.go:134] libmachine: Using SSH client type: native
	I0531 11:08:27.537482   13098 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51933 <nil> <nil>}
	I0531 11:08:27.537496   13098 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:08:27.652716   13098 main.go:134] libmachine: SSH cmd err, output: <nil>: 
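	(Editor's aside: the diff-or-replace command just run makes the unit update idempotent: docker.service is swapped out, and the daemon reloaded and restarted, only when the freshly rendered docker.service.new differs from the installed unit. A minimal Go sketch of the same compare-then-swap pattern, assuming local file paths; the helper name is illustrative, not minikube's:)

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged installs next over path only when the contents differ and
// reports whether a swap (and therefore a daemon-reload/restart) is needed.
func replaceIfChanged(path, next string) (bool, error) {
	cur, _ := os.ReadFile(path) // a missing unit reads as empty and so differs
	want, err := os.ReadFile(next)
	if err != nil {
		return false, err
	}
	if bytes.Equal(cur, want) {
		return false, nil // identical: leave the running service untouched
	}
	return true, os.Rename(next, path)
}

func main() {
	changed, err := replaceIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	fmt.Println(changed, err) // the caller restarts docker only when changed
}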
	I0531 11:08:27.652730   13098 machine.go:91] provisioned docker machine in 1.314427116s
	I0531 11:08:27.652737   13098 start.go:306] post-start starting for "old-k8s-version-20220531110241-2169" (driver="docker")
	I0531 11:08:27.652741   13098 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:08:27.652808   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:08:27.652850   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.722531   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.803808   13098 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:08:27.807457   13098 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:08:27.807489   13098 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:08:27.807499   13098 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:08:27.807506   13098 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:08:27.807514   13098 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:08:27.807618   13098 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:08:27.807774   13098 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:08:27.807937   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:08:27.815028   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:08:27.832481   13098 start.go:309] post-start completed in 179.738586ms
	I0531 11:08:27.832554   13098 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:08:27.832607   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:27.903577   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:27.985899   13098 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:08:27.990822   13098 fix.go:57] fixHost completed within 2.302646254s
	I0531 11:08:27.990835   13098 start.go:81] releasing machines lock for "old-k8s-version-20220531110241-2169", held for 2.30268259s
	I0531 11:08:27.990918   13098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220531110241-2169
	I0531 11:08:28.061472   13098 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:08:28.061476   13098 ssh_runner.go:195] Run: systemctl --version
	I0531 11:08:28.061544   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.061541   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.137038   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:28.138708   13098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/old-k8s-version-20220531110241-2169/id_rsa Username:docker}
	I0531 11:08:28.362084   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:08:28.375292   13098 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:08:28.385346   13098 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:08:28.385407   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:08:28.394958   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:08:28.407962   13098 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:08:28.477039   13098 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:08:28.550358   13098 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:08:28.560122   13098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:08:28.629417   13098 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:08:28.639660   13098 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:08:28.673402   13098 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:08:28.751209   13098 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0531 11:08:28.751364   13098 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220531110241-2169 dig +short host.docker.internal
	I0531 11:08:28.891410   13098 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:08:28.891541   13098 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:08:28.895978   13098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:08:28.906678   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:28.976360   13098 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 11:08:28.976426   13098 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:08:29.006401   13098 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 11:08:29.006417   13098 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:08:29.006493   13098 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:08:29.035658   13098 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0531 11:08:29.035672   13098 cache_images.go:84] Images are preloaded, skipping loading
	I0531 11:08:29.035742   13098 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:08:29.110219   13098 cni.go:95] Creating CNI manager for ""
	I0531 11:08:29.110231   13098 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:08:29.110243   13098 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:08:29.110256   13098 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220531110241-2169 NodeName:old-k8s-version-20220531110241-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:08:29.110376   13098 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220531110241-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220531110241-2169
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 11:08:29.110458   13098 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220531110241-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
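	(Editor's aside: the kubeadm config and kubelet unit above are rendered from the option set logged at kubeadm.go:158. A toy text/template rendering of a ClusterConfiguration fragment from a few of those fields; the template is a simplified stand-in for illustration, not minikube's actual template:)

package main

import (
	"os"
	"text/template"
)

// A cut-down ClusterConfiguration template; field names mirror the kubeadm
// options line in the log above.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Field values copied from the kubeadm options logged for this run.
	opts := map[string]interface{}{
		"ClusterName":         "old-k8s-version-20220531110241-2169",
		"ControlPlaneAddress": "control-plane.minikube.internal",
		"APIServerPort":       8443,
		"KubernetesVersion":   "v1.16.0",
		"DNSDomain":           "cluster.local",
		"PodSubnet":           "10.244.0.0/16",
		"ServiceCIDR":         "10.96.0.0/12",
	}
	template.Must(template.New("kubeadm").Parse(clusterCfg)).Execute(os.Stdout, opts)
}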
	I0531 11:08:29.110513   13098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0531 11:08:29.118416   13098 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:08:29.118475   13098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:08:29.127166   13098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0531 11:08:29.139824   13098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:08:29.152704   13098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0531 11:08:29.167560   13098 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:08:29.171514   13098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:08:29.180955   13098 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169 for IP: 192.168.49.2
	I0531 11:08:29.181081   13098 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:08:29.181135   13098 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:08:29.181221   13098 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/client.key
	I0531 11:08:29.181289   13098 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key.dd3b5fb2
	I0531 11:08:29.181350   13098 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key
	I0531 11:08:29.181563   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:08:29.181602   13098 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:08:29.181614   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:08:29.181650   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:08:29.181679   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:08:29.181715   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:08:29.181774   13098 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:08:29.182294   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:08:29.204256   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 11:08:29.222547   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:08:29.240162   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531110241-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 11:08:29.257426   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:08:29.274651   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:08:29.291982   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:08:29.310334   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:08:29.327611   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:08:29.345041   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:08:29.361815   13098 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:08:29.379584   13098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:08:29.392136   13098 ssh_runner.go:195] Run: openssl version
	I0531 11:08:29.397577   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:08:29.405431   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.409147   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.409201   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:08:29.414250   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:08:29.421379   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:08:29.429280   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.433082   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.433125   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:08:29.438228   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:08:29.445650   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:08:29.453384   13098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.457538   13098 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.457576   13098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:08:29.462718   13098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:08:29.469934   13098 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220531110241-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220531110241-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:08:29.470029   13098 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:08:29.501701   13098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:08:29.509593   13098 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:08:29.509612   13098 kubeadm.go:626] restartCluster start
	I0531 11:08:29.509663   13098 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:08:29.516645   13098 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:29.516701   13098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531110241-2169
	I0531 11:08:29.586936   13098 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220531110241-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:08:29.587110   13098 kubeconfig.go:127] "old-k8s-version-20220531110241-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:08:29.587479   13098 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:08:29.588780   13098 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:08:29.596409   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:29.596461   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:29.604840   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:29.805295   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:29.805488   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:29.816379   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:28.626455   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:31.126569   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:30.005006   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.005125   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.014520   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.204919   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.205019   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.214251   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.406956   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.407122   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.417653   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.604950   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.605035   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.614470   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:30.805126   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:30.805274   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:30.814510   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.006953   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.007113   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.017745   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.206207   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.206350   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.217593   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.404972   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.405096   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.415473   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.606801   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.606929   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.616800   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:31.805593   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:31.805718   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:31.816339   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.005143   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.005270   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.015802   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.204976   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.205118   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.216470   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.406933   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.407072   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.417683   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.606963   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.607083   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.617829   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.617839   13098 api_server.go:165] Checking apiserver status ...
	I0531 11:08:32.617884   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:08:32.628709   13098 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:08:32.628721   13098 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 11:08:32.628730   13098 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:08:32.628795   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:08:32.656434   13098 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:08:32.666183   13098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:08:32.673748   13098 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 May 31 18:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5779 May 31 18:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5927 May 31 18:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5727 May 31 18:04 /etc/kubernetes/scheduler.conf
	
	I0531 11:08:32.673812   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:08:32.681500   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:08:32.689012   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:08:32.696145   13098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:08:32.703549   13098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:08:32.711764   13098 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:08:32.711775   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:32.763732   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.029517   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.265778182s)
	I0531 11:08:34.029540   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.237331   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.291890   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:08:34.348275   13098 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:08:34.348330   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:34.859154   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:33.625705   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:35.626278   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:37.628129   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:35.357249   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:35.859107   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:36.357652   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:36.859123   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:37.359109   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:37.859168   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:38.359070   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:38.857449   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:39.357079   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:39.858143   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:40.127313   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:42.627670   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:40.359003   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:40.859087   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:41.359036   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:41.859047   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:42.357140   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:42.857133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:43.357195   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:43.859044   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:44.357080   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:44.858240   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:44.627804   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:46.628678   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:45.357042   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:45.857108   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:46.357039   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:46.858005   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:47.357517   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:47.856962   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:48.358073   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:48.857317   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:49.356887   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:49.858909   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:49.126737   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:51.628829   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:50.358934   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:50.856994   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:51.358931   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:51.858801   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:52.356935   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:52.857770   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:53.357133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:53.858875   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:54.357428   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:54.856995   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:54.126925   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:56.628426   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:08:55.357549   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:55.858840   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:56.356750   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:56.858862   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:57.356865   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:57.858837   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:58.358311   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:58.858798   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:59.358828   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:59.858881   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:08:59.129005   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:01.628260   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:00.358750   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:00.858856   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:01.357557   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:01.858847   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:02.356665   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:02.858662   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:03.358757   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:03.857406   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:04.358099   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:04.856720   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:03.628429   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:06.128506   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:05.358724   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:05.858746   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:06.357258   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:06.856763   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:07.357893   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:07.858727   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:08.356831   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:08.857000   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:09.358665   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:09.858133   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:08.627169   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:10.650647   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:10.357032   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:10.857957   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:11.356952   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:11.858660   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:12.357622   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:12.858640   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:13.356693   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:13.858667   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:14.357353   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:14.858510   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:13.127305   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:15.627020   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:15.358636   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:15.856620   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:16.357157   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:16.857097   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:17.356528   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:17.856738   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:18.356746   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:18.856987   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:19.358618   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:19.858357   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:18.127594   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:20.628797   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:20.357432   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:20.858551   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:21.358576   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:21.857145   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:22.357177   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:22.858306   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:23.356771   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:23.857014   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:24.357042   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:24.856754   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:23.126591   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:25.625261   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:27.627552   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:25.358119   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:25.857621   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:26.358516   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:26.857398   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:27.358455   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:27.858462   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:28.357167   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:28.856990   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:29.357801   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:29.857261   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:30.124338   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:32.124447   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:30.357437   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:30.857515   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:31.358160   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:31.858408   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:32.358413   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:32.857663   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:33.357837   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:33.857102   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:34.357463   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:34.388265   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.388277   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:34.388334   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:34.417576   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.417588   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:34.417644   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:34.446353   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.446366   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:34.446422   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:34.475446   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.475461   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:34.475516   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:34.505125   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.505137   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:34.505192   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:34.533497   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.533509   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:34.533572   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:34.562509   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.562526   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:34.562590   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:34.591764   13098 logs.go:274] 0 containers: []
	W0531 11:09:34.591780   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:34.591788   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:34.591795   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:34.630492   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:34.630506   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:34.642193   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:34.642206   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:34.696106   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:34.696117   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:34.696124   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:34.708414   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:34.708426   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:34.125011   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:36.126579   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:36.762711   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054297817s)
	I0531 11:09:39.263497   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:39.356952   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:39.386768   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.386781   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:39.386842   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:39.417308   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.417321   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:39.417377   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:39.447193   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.447206   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:39.447273   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:39.476858   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.476871   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:39.476925   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:39.505331   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.505343   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:39.505393   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:39.534339   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.534350   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:39.534411   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:39.564150   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.564163   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:39.564226   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:39.593779   13098 logs.go:274] 0 containers: []
	W0531 11:09:39.593792   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:39.593799   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:39.593807   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:39.605961   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:39.605980   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:39.660198   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:39.660212   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:39.660221   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:39.673023   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:39.673035   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:38.625485   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:40.627636   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:41.727761   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054739505s)
	I0531 11:09:41.727870   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:41.727877   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:44.270600   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:44.357538   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:44.387750   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.387765   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:44.387828   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:44.417243   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.417256   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:44.417316   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:44.446079   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.446093   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:44.446149   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:44.475402   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.475414   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:44.475474   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:44.504617   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.504631   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:44.504699   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:44.534026   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.534043   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:44.534107   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:44.563392   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.563406   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:44.563466   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:44.591445   13098 logs.go:274] 0 containers: []
	W0531 11:09:44.591457   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:44.591464   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:44.591470   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:44.631333   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:44.631348   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:44.643173   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:44.643186   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:44.696709   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:44.696722   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:44.696730   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:44.709853   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:44.709866   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:43.125303   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:45.126063   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:47.628073   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:46.763128   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053274008s)
	I0531 11:09:49.263476   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:49.356184   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:49.386522   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.386534   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:49.386587   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:49.415937   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.415954   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:49.416011   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:49.444575   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.444586   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:49.444640   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:49.473589   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.473602   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:49.473660   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:49.501607   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.501620   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:49.501680   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:49.530816   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.530829   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:49.530905   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:49.561098   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.561110   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:49.561164   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:49.590698   13098 logs.go:274] 0 containers: []
	W0531 11:09:49.590715   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:49.590723   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:49.590730   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:49.629663   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:49.629677   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:49.641508   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:49.641539   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:49.696749   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:49.696760   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:49.696771   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:49.709171   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:49.709184   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:50.125867   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:52.127516   12940 pod_ready.go:102] pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace has status "Ready":"False"
	I0531 11:09:51.764551   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055377989s)
	I0531 11:09:54.264918   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:09:54.356039   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:54.388392   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.388407   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:54.388479   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:54.421365   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.421378   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:54.421433   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:54.455045   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.455057   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:54.455119   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:54.489207   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.489220   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:54.489279   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:54.521630   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.521643   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:54.521702   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:54.551997   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.552012   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:54.552089   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:54.585330   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.585343   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:54.585405   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:54.618689   13098 logs.go:274] 0 containers: []
	W0531 11:09:54.618707   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:54.618719   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:54.618731   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:09:53.620597   12940 pod_ready.go:81] duration metric: took 4m0.400347067s waiting for pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace to be "Ready" ...
	E0531 11:09:53.620644   12940 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-5j59t" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 11:09:53.620672   12940 pod_ready.go:38] duration metric: took 4m35.821645397s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:09:53.620704   12940 kubeadm.go:630] restartCluster took 4m46.551999937s
	W0531 11:09:53.620833   12940 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 11:09:53.620860   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:09:56.676166   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057446673s)
	I0531 11:09:56.676301   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:56.676310   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:56.717480   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:56.717496   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:56.731748   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:56.731762   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:56.784506   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
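
The "connection refused" from kubectl here is a transport-level failure: nothing is listening on localhost:8443 inside the node, which is consistent with the empty kube-apiserver container scan above. A raw TCP probe separates "no listener" from "listener but unhealthy"; a minimal Go sketch (host and port from the error message, timeout chosen arbitrarily):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// A refused connection errors out immediately; a running apiserver
    	// would accept the TCP handshake even before TLS or auth happen.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("no listener on localhost:8443:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is accepting connections on localhost:8443")
    }
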
	I0531 11:09:56.784518   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:56.784525   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:59.299028   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
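
Each retry cycle starts by asking whether an apiserver process exists at all. In the pgrep invocation, -f matches against the full command line, -x requires the pattern to match that line entirely, and -n keeps only the newest match, so the probe finds a kube-apiserver whose arguments mention minikube. A sketch of the same check (flags and pattern copied from the log line):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// pgrep exits with status 1 when nothing matches, which surfaces
    	// here as a non-nil error from Output().
    	out, err := exec.Command("sudo", "pgrep", "-xnf",
    		"kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("no running kube-apiserver found:", err)
    		return
    	}
    	fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
    }
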
	I0531 11:09:59.356233   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:09:59.387581   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.387594   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:09:59.387648   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:09:59.416950   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.416965   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:09:59.417026   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:09:59.445994   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.446006   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:09:59.446066   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:09:59.474706   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.474719   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:09:59.474774   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:09:59.503641   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.503653   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:09:59.503706   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:09:59.532168   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.532183   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:09:59.532238   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:09:59.561842   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.561855   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:09:59.561916   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:09:59.590504   13098 logs.go:274] 0 containers: []
	W0531 11:09:59.590516   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:09:59.590522   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:09:59.590529   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:09:59.629633   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:09:59.629647   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:09:59.641945   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:09:59.641959   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:09:59.696474   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:09:59.696490   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:09:59.696496   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:09:59.709878   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:09:59.709892   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:01.764080   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054200644s)
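
The container-status probe that keeps taking ~2s is deliberately runtime-agnostic: the backticks resolve `which crictl || echo crictl` to a full path when crictl is installed (and the bare name otherwise), and if that branch exits non-zero the trailing || falls through to plain docker ps. A sketch of running the same fallback from Go, mirroring how the log wraps it in bash -c (the script string is exactly the one shown in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl when present; otherwise list containers with docker.
    	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("both crictl and docker listings failed:", err)
    	}
    }
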
	I0531 11:10:04.265110   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:04.356351   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:04.389088   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.389101   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:04.389161   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:04.418896   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.418909   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:04.418978   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:04.447037   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.447050   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:04.447113   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:04.476510   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.476525   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:04.476584   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:04.504763   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.504776   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:04.504830   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:04.533804   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.533816   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:04.533874   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:04.563500   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.563513   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:04.563570   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:04.592999   13098 logs.go:274] 0 containers: []
	W0531 11:10:04.593012   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:04.593019   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:04.593025   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:04.631360   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:04.631374   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:04.643433   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:04.643448   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:04.696754   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:04.696772   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:04.696779   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:04.708788   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:04.708799   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:06.764822   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056035115s)
	I0531 11:10:09.266997   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:09.356193   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:09.388153   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.388167   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:09.388231   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:09.417585   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.417597   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:09.417653   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:09.449878   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.449891   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:09.449954   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:09.479850   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.479864   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:09.479927   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:09.509485   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.509498   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:09.509561   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:09.540190   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.540204   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:09.540259   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:09.569247   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.569259   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:09.569318   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:09.598109   13098 logs.go:274] 0 containers: []
	W0531 11:10:09.598122   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:09.598129   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:09.598136   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:09.638429   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:09.638443   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:09.650114   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:09.650127   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:09.701838   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:09.701849   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:09.701856   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:09.714324   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:09.714337   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:11.769141   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054814108s)
	I0531 11:10:14.271474   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:14.357946   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:14.388903   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.388915   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:14.388971   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:14.417777   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.417789   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:14.417858   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:14.445824   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.445838   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:14.445899   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:14.475251   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.475263   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:14.475321   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:14.503865   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.503878   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:14.503932   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:14.533523   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.533536   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:14.533594   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:14.562861   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.562874   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:14.562926   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:14.593313   13098 logs.go:274] 0 containers: []
	W0531 11:10:14.593326   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:14.593333   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:14.593340   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:14.647510   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:14.647524   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:14.647531   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:14.659937   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:14.659953   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:16.716744   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056804235s)
	I0531 11:10:16.716857   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:16.716863   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:16.754919   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:16.754931   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:19.267035   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:19.357894   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:19.391000   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.391013   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:19.391069   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:19.419657   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.419668   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:19.419722   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:19.449464   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.449476   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:19.449530   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:19.479823   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.479837   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:19.479896   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:19.509429   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.509443   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:19.509523   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:19.538786   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.538798   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:19.538853   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:19.568183   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.568199   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:19.568256   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:19.598298   13098 logs.go:274] 0 containers: []
	W0531 11:10:19.598311   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:19.598318   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:19.598325   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:19.610062   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:19.610073   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:19.661888   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:19.661899   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:19.661905   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:19.673854   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:19.673866   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:21.733389   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059536851s)
	I0531 11:10:21.733494   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:21.733501   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:24.275115   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:24.356493   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:24.386276   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.386290   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:24.386350   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:24.416711   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.416723   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:24.416776   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:24.448608   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.448620   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:24.448673   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:24.478070   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.478085   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:24.478143   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:24.507952   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.507964   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:24.508019   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:24.536910   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.536923   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:24.536976   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:24.565298   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.565309   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:24.565363   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:24.594397   13098 logs.go:274] 0 containers: []
	W0531 11:10:24.594408   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:24.594415   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:24.594421   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:24.646558   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:24.646575   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:24.646582   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:24.658715   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:24.658729   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:26.714683   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055966036s)
	I0531 11:10:26.714790   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:26.714797   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:26.754170   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:26.754183   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:29.268130   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:29.355669   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:29.386195   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.386207   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:29.386267   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:29.415255   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.415269   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:29.415327   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:29.445521   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.445533   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:29.445590   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:29.474576   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.474590   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:29.474648   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:29.503269   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.503283   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:29.503340   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:29.531750   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.531763   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:29.531818   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:29.560522   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.560534   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:29.560588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:29.589986   13098 logs.go:274] 0 containers: []
	W0531 11:10:29.589997   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:29.590004   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:29.590012   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:31.969790   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.349383075s)
	I0531 11:10:31.969847   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:10:31.979794   12940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:10:31.987417   12940 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:10:31.987474   12940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:10:31.994661   12940 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:10:31.994688   12940 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
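
After the reset, the cluster is rebuilt from scratch with kubeadm init. The long --ignore-preflight-errors list is how minikube tolerates leftovers of the previous cluster (non-empty manifest and etcd directories, an occupied port 10250, swap, and the SystemVerification check it already said it would skip for the docker driver). A sketch of composing that invocation in Go, calling the versioned binary directly instead of prepending it to PATH as the log does (binary path, config path, and the error list are copied from the log line):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	ignored := []string{
    		"DirAvailable--etc-kubernetes-manifests",
    		"DirAvailable--var-lib-minikube",
    		"DirAvailable--var-lib-minikube-etcd",
    		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
    		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
    		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
    		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
    		"Port-10250", "Swap", "Mem", "SystemVerification",
    		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
    	}
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.23.6/kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors="+strings.Join(ignored, ","))
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("kubeadm init failed:", err)
    	}
    }
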
	I0531 11:10:32.492460   12940 out.go:204]   - Generating certificates and keys ...
	I0531 11:10:31.642158   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052159996s)
	I0531 11:10:31.642264   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:31.642271   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:31.680540   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:31.680560   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:31.693978   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:31.693995   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:31.750664   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:31.750676   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:31.750683   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:34.264743   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:34.355629   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:34.389804   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.389817   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:34.389879   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:34.421065   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.421078   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:34.421133   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:34.450506   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.450525   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:34.450588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:34.480274   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.480286   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:34.480339   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:34.509810   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.509825   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:34.509885   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:34.547728   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.547741   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:34.547797   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:34.577758   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.577770   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:34.577824   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:34.607647   13098 logs.go:274] 0 containers: []
	W0531 11:10:34.607660   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:34.607666   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:34.607673   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:34.646813   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:34.646827   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:34.659116   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:34.659131   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:34.711878   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:34.711895   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:34.711902   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:34.723823   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:34.723835   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:33.618772   12940 out.go:204]   - Booting up control plane ...
	I0531 11:10:36.778445   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054619098s)
	I0531 11:10:39.278826   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:39.355843   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:39.386690   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.386705   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:39.386759   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:39.415159   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.415171   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:39.415229   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:39.451994   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.452007   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:39.452062   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:39.480982   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.480996   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:39.481053   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:39.509323   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.509336   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:39.509390   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:39.537420   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.537432   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:39.537489   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:39.565876   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.565889   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:39.565942   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:39.596336   13098 logs.go:274] 0 containers: []
	W0531 11:10:39.596347   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:39.596354   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:39.596361   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:39.653266   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:39.653276   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:39.653284   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:39.665996   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:39.666008   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:40.188826   12940 out.go:204]   - Configuring RBAC rules ...
	I0531 11:10:40.564114   12940 cni.go:95] Creating CNI manager for ""
	I0531 11:10:40.564126   12940 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:10:40.564150   12940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:10:40.564229   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:40.564240   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531110349-2169 minikube.k8s.io/updated_at=2022_05_31T11_10_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:40.734108   12940 ops.go:34] apiserver oom_adj: -16
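
The -16 read back here is the apiserver's OOM-killer bias: negative values make the kernel less willing to kill the process under memory pressure. The log reads it with cat /proc/$(pgrep kube-apiserver)/oom_adj; a sketch of the same read in Go (oom_adj is the legacy interface, so this assumes a kernel that still exposes it alongside oom_score_adj):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the newest kube-apiserver PID, then read its legacy OOM bias.
    	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
    	val, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", val) // e.g. -16, as in the log
    }
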
	I0531 11:10:40.734198   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:41.356952   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:41.856602   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:42.356807   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:41.723703   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057708131s)
	I0531 11:10:41.723821   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:41.723829   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:41.762214   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:41.762228   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:44.274433   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:44.355956   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:44.388561   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.388573   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:44.388631   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:44.418528   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.418540   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:44.418596   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:44.448209   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.448228   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:44.448287   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:44.476717   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.476731   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:44.476794   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:44.506060   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.506073   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:44.506127   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:44.535489   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.535502   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:44.535556   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:44.566115   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.566126   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:44.566195   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:44.595347   13098 logs.go:274] 0 containers: []
	W0531 11:10:44.595359   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:44.595366   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:44.595373   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:44.635087   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:44.635104   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:44.648064   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:44.648084   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:44.702705   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:44.702715   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:44.702725   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:44.715262   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:44.715275   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:42.856378   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:43.356331   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:43.856492   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:44.356267   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:44.858359   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:45.356651   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:45.856283   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:46.357540   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:46.856218   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:47.357035   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:46.769384   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05412268s)
	I0531 11:10:49.269850   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:49.356095   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:49.389116   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.389130   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:49.389189   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:49.418954   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.418966   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:49.419021   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:49.448672   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.448684   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:49.448748   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:49.477673   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.477685   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:49.477741   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:49.506658   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.506673   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:49.506736   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:49.535844   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.535856   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:49.535912   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:49.564691   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.564704   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:49.564757   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:49.594090   13098 logs.go:274] 0 containers: []
	W0531 11:10:49.594102   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:49.594109   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:49.594116   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:49.634714   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:49.634727   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:49.646653   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:49.646666   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:49.699411   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:10:49.699421   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:49.699428   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:49.712418   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:49.712430   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:10:47.857137   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:48.356441   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:48.856134   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:49.356342   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:49.856351   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:50.357506   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:50.856154   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:51.356302   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:51.857335   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:52.356579   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:52.856094   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:53.356131   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:10:53.409894   12940 kubeadm.go:1045] duration metric: took 12.845886553s to wait for elevateKubeSystemPrivileges.
	I0531 11:10:53.409909   12940 kubeadm.go:397] StartCluster complete in 5m46.379280219s
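
The burst of `kubectl get sa default` calls above, spaced roughly 500ms apart, is a readiness gate: kubeadm creates the `default` service account asynchronously, and pods cannot run in the namespace until it exists, so minikube polls until the get succeeds (12.8s here). A sketch of that retry loop (the 500ms interval matches the observed spacing; the 2-minute deadline is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.23.6/kubectl"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Exit status 0 means the default service account exists.
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
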
	I0531 11:10:53.409926   12940 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:10:53.410003   12940 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:10:53.410518   12940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:10:53.925611   12940 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531110349-2169" rescaled to 1
	I0531 11:10:53.925646   12940 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:10:53.925699   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:10:53.925711   12940 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:10:53.925889   12940 config.go:178] Loaded profile config "no-preload-20220531110349-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:10:53.948507   12940 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531110349-2169"
	I0531 11:10:53.948520   12940 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531110349-2169"
	I0531 11:10:53.948526   12940 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531110349-2169"
	I0531 11:10:53.948526   12940 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531110349-2169"
	W0531 11:10:53.948534   12940 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:10:53.948425   12940 out.go:177] * Verifying Kubernetes components...
	I0531 11:10:53.948548   12940 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531110349-2169"
	I0531 11:10:53.948538   12940 addons.go:65] Setting dashboard=true in profile "no-preload-20220531110349-2169"
	I0531 11:10:53.989215   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:10:53.948540   12940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531110349-2169"
	W0531 11:10:53.948564   12940 addons.go:165] addon metrics-server should already be in state true
	I0531 11:10:53.989216   12940 addons.go:153] Setting addon dashboard=true in "no-preload-20220531110349-2169"
	W0531 11:10:53.989293   12940 addons.go:165] addon dashboard should already be in state true
	I0531 11:10:53.948581   12940 host.go:66] Checking if "no-preload-20220531110349-2169" exists ...
	I0531 11:10:53.989299   12940 host.go:66] Checking if "no-preload-20220531110349-2169" exists ...
	I0531 11:10:53.989322   12940 host.go:66] Checking if "no-preload-20220531110349-2169" exists ...
	I0531 11:10:53.989569   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:53.989706   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:53.989726   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:53.993447   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:54.029958   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.030047   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
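The long pipeline above is the host-record injection: minikube dumps the coredns ConfigMap, uses sed to splice a hosts stanza in front of the Corefile's "forward . /etc/resolv.conf" line, and replaces the ConfigMap in a single shot. Decoded from the sed expression, the inserted stanza is:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }

With this in place, pods resolve host.minikube.internal to the host gateway (192.168.65.2 on Docker Desktop) and unresolved names still fall through to the forward plugin; the "host record injected into CoreDNS" line later in this run confirms the replace succeeded.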
	I0531 11:10:54.106735   12940 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531110349-2169"
	I0531 11:10:54.143097   12940 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	W0531 11:10:54.143122   12940 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:10:54.179057   12940 host.go:66] Checking if "no-preload-20220531110349-2169" exists ...
	I0531 11:10:54.200136   12940 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 11:10:54.237117   12940 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:10:54.200658   12940 cli_runner.go:164] Run: docker container inspect no-preload-20220531110349-2169 --format={{.State.Status}}
	I0531 11:10:54.258122   12940 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:10:54.295196   12940 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:10:54.332187   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 11:10:54.332258   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
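The "installing /etc/kubernetes/addons/…" / "scp memory --> … (N bytes)" pairs show that addon manifests are embedded in the minikube binary and streamed over the existing SSH session rather than copied from files on the host. A rough sketch of streaming an in-memory manifest to a remote path; tee is used here for brevity, while minikube's ssh_runner actually speaks the scp protocol, so treat this helper as an assumption:

    // Sketch: write an in-memory manifest to a remote path over SSH
    // (illustrative only; not minikube's scp implementation).
    package assets

    import (
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func writeRemote(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        stdin, err := sess.StdinPipe()
        if err != nil {
            return err
        }
        // Stream the bytes into sudo tee on the remote side.
        if err := sess.Start(fmt.Sprintf("sudo tee %s >/dev/null", dst)); err != nil {
            return err
        }
        if _, err := stdin.Write(data); err != nil {
            return err
        }
        stdin.Close()
        return sess.Wait()
    }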
	I0531 11:10:51.767720   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055302033s)
	I0531 11:10:54.268584   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:54.355312   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:54.402717   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.402746   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:54.402853   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:54.471995   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.472008   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:54.472076   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:54.519373   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.519388   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:54.519452   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:54.561548   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.561561   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:54.561618   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:10:54.591345   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.591357   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:10:54.591412   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:10:54.640864   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.640879   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:10:54.640945   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:10:54.671790   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.671803   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:10:54.671857   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:10:54.706884   13098 logs.go:274] 0 containers: []
	W0531 11:10:54.706895   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:10:54.706903   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:10:54.706911   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
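The block above comes from process 13098, a second test cluster sharing this interleaved log. Its diagnostics collector probes for each control-plane component by container name and warns when none exists, then gathers container status via the crictl-or-docker fallback shown in the Run line. A minimal sketch of the per-component probe, assuming docker is on PATH and the k8s_ name prefix used above:

    // Sketch: probe for a Kubernetes component container by name prefix
    // and warn when nothing matches (illustrative only).
    package diagnose

    import (
        "log"
        "os/exec"
        "strings"
    )

    func containerIDs(name string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name,
            "--format", "{{.ID}}").Output()
        if err != nil {
            log.Printf("docker ps failed for %q: %v", name, err)
            return nil
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            log.Printf("No container was found matching %q", name)
        }
        return ids
    }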
	I0531 11:10:54.332276   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:10:54.332375   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.349325   12940 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531110349-2169" to be "Ready" ...
	I0531 11:10:54.369115   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.369155   12940 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 11:10:54.369164   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:10:54.369272   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.388239   12940 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:10:54.388268   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:10:54.388356   12940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531110349-2169
	I0531 11:10:54.398369   12940 node_ready.go:49] node "no-preload-20220531110349-2169" has status "Ready":"True"
	I0531 11:10:54.398390   12940 node_ready.go:38] duration metric: took 29.340031ms waiting for node "no-preload-20220531110349-2169" to be "Ready" ...
	I0531 11:10:54.398402   12940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:10:54.407762   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-kr94r" in "kube-system" namespace to be "Ready" ...
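From here, pod_ready blocks on each system-critical pod until its PodReady condition reports True, with a 6m budget per pod. A minimal client-go sketch of that check; the 2s poll interval and function name are assumptions, and unlike minikube's version (which skips pods that disappear, as happens to coredns-64897985d-kr94r below) this one simply keeps polling on errors:

    // Sketch: wait until a pod reports the Ready condition (illustrative).
    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // pod may not exist yet; keep polling
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady {
                    return cond.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }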
	I0531 11:10:54.485063   12940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51693 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531110349-2169/id_rsa Username:docker}
	I0531 11:10:54.486446   12940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51693 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531110349-2169/id_rsa Username:docker}
	I0531 11:10:54.493005   12940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51693 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531110349-2169/id_rsa Username:docker}
	I0531 11:10:54.496616   12940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51693 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531110349-2169/id_rsa Username:docker}
	I0531 11:10:54.609442   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:10:54.614198   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:10:54.614214   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:10:54.627024   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:10:54.699179   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:10:54.699194   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:10:54.706453   12940 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:10:54.706467   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:10:54.715875   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:10:54.715890   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:10:54.737484   12940 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:10:54.737497   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:10:54.795544   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:10:54.795557   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:10:54.805689   12940 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:10:54.805703   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:10:54.907238   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:10:54.907256   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:10:54.917512   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:10:54.936251   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:10:54.936264   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:10:55.030873   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:10:55.030896   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:10:55.125262   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:10:55.125279   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:10:55.210483   12940 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:10:55.210498   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:10:55.223830   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.19376962s)
	I0531 11:10:55.223855   12940 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0531 11:10:55.297323   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:10:55.420835   12940 pod_ready.go:97] error getting pod "coredns-64897985d-kr94r" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kr94r" not found
	I0531 11:10:55.420856   12940 pod_ready.go:81] duration metric: took 1.013083538s waiting for pod "coredns-64897985d-kr94r" in "kube-system" namespace to be "Ready" ...
	E0531 11:10:55.420871   12940 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-kr94r" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kr94r" not found
	I0531 11:10:55.420884   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-r9cpx" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:55.444508   12940 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531110349-2169"
	I0531 11:10:56.168362   12940 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 11:10:56.205410   12940 addons.go:417] enableAddons completed in 2.279750293s
	I0531 11:10:57.431596   12940 pod_ready.go:102] pod "coredns-64897985d-r9cpx" in "kube-system" namespace has status "Ready":"False"
	I0531 11:10:56.760836   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053934934s)
	I0531 11:10:56.760940   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:10:56.760946   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:10:56.799437   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:10:56.799452   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:10:56.813095   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:10:56.813109   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:10:56.865931   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
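This failure is consistent with the empty container listings just above: with no kube-apiserver container running, nothing is serving localhost:8443 inside the node, so every kubectl call there is refused. The same probe-and-gather cycle (pgrep for the apiserver, list containers, collect kubelet, dmesg, describe-nodes, Docker, and container-status logs) repeats below at roughly five-second intervals until the cluster's start timeout expires, which is why this identical stderr block recurs throughout the rest of the log.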
	I0531 11:10:56.865942   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:10:56.865949   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:10:59.378503   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:59.856449   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:10:58.933788   12940 pod_ready.go:92] pod "coredns-64897985d-r9cpx" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.933803   12940 pod_ready.go:81] duration metric: took 3.51295192s waiting for pod "coredns-64897985d-r9cpx" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.933810   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.941613   12940 pod_ready.go:92] pod "etcd-no-preload-20220531110349-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.941624   12940 pod_ready.go:81] duration metric: took 7.809186ms waiting for pod "etcd-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.941635   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.950180   12940 pod_ready.go:92] pod "kube-apiserver-no-preload-20220531110349-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.950192   12940 pod_ready.go:81] duration metric: took 8.550026ms waiting for pod "kube-apiserver-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.950198   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.956039   12940 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220531110349-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.956051   12940 pod_ready.go:81] duration metric: took 5.847589ms waiting for pod "kube-controller-manager-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.956058   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pcc2" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.962654   12940 pod_ready.go:92] pod "kube-proxy-2pcc2" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:58.962667   12940 pod_ready.go:81] duration metric: took 6.602768ms waiting for pod "kube-proxy-2pcc2" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:58.962673   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:59.329338   12940 pod_ready.go:92] pod "kube-scheduler-no-preload-20220531110349-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:10:59.329348   12940 pod_ready.go:81] duration metric: took 366.673929ms waiting for pod "kube-scheduler-no-preload-20220531110349-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:10:59.329353   12940 pod_ready.go:38] duration metric: took 4.930989595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:10:59.329370   12940 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:10:59.329422   12940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:10:59.340810   12940 api_server.go:71] duration metric: took 5.415207061s to wait for apiserver process to appear ...
	I0531 11:10:59.340824   12940 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:10:59.340831   12940 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51697/healthz ...
	I0531 11:10:59.346099   12940 api_server.go:266] https://127.0.0.1:51697/healthz returned 200:
	ok
	I0531 11:10:59.347221   12940 api_server.go:140] control plane version: v1.23.6
	I0531 11:10:59.347230   12940 api_server.go:130] duration metric: took 6.40122ms to wait for apiserver health ...
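Once pgrep finds the apiserver process, readiness is confirmed by polling the /healthz endpoint on the forwarded port until it answers 200 with body "ok", after which the control-plane version is read. A minimal sketch of that health probe; the insecure TLS config stands in for minikube's real CA handling and is an assumption:

    // Sketch: poll an apiserver /healthz endpoint until it returns 200 "ok"
    // (illustrative only; skips certificate verification for brevity).
    package health

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }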
	I0531 11:10:59.347235   12940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:10:59.532996   12940 system_pods.go:59] 8 kube-system pods found
	I0531 11:10:59.533010   12940 system_pods.go:61] "coredns-64897985d-r9cpx" [fb5cf9cb-7184-4170-934e-1d7cfe1d690e] Running
	I0531 11:10:59.533014   12940 system_pods.go:61] "etcd-no-preload-20220531110349-2169" [ac9f3123-c82c-4739-8910-0d1b91f259b9] Running
	I0531 11:10:59.533019   12940 system_pods.go:61] "kube-apiserver-no-preload-20220531110349-2169" [db4029ec-5ce6-4188-a6b8-048f56eafdaf] Running
	I0531 11:10:59.533023   12940 system_pods.go:61] "kube-controller-manager-no-preload-20220531110349-2169" [2f0ae197-3c23-4ee4-a342-221494979b29] Running
	I0531 11:10:59.533028   12940 system_pods.go:61] "kube-proxy-2pcc2" [b0618709-0f72-4a65-9379-8838a18e826c] Running
	I0531 11:10:59.533033   12940 system_pods.go:61] "kube-scheduler-no-preload-20220531110349-2169" [043aa359-d83f-4904-ad1d-8cc3ce571c62] Running
	I0531 11:10:59.533038   12940 system_pods.go:61] "metrics-server-b955d9d8-xd4wv" [27025e59-a89a-49b0-b7ab-6d9daab5c880] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:10:59.533043   12940 system_pods.go:61] "storage-provisioner" [6f6a080a-3fc2-4d79-b754-6d309648dcd3] Running
	I0531 11:10:59.533047   12940 system_pods.go:74] duration metric: took 185.810453ms to wait for pod list to return data ...
	I0531 11:10:59.533052   12940 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:10:59.729532   12940 default_sa.go:45] found service account: "default"
	I0531 11:10:59.729543   12940 default_sa.go:55] duration metric: took 196.490372ms for default service account to be created ...
	I0531 11:10:59.729549   12940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 11:10:59.933433   12940 system_pods.go:86] 8 kube-system pods found
	I0531 11:10:59.933450   12940 system_pods.go:89] "coredns-64897985d-r9cpx" [fb5cf9cb-7184-4170-934e-1d7cfe1d690e] Running
	I0531 11:10:59.933455   12940 system_pods.go:89] "etcd-no-preload-20220531110349-2169" [ac9f3123-c82c-4739-8910-0d1b91f259b9] Running
	I0531 11:10:59.933459   12940 system_pods.go:89] "kube-apiserver-no-preload-20220531110349-2169" [db4029ec-5ce6-4188-a6b8-048f56eafdaf] Running
	I0531 11:10:59.933463   12940 system_pods.go:89] "kube-controller-manager-no-preload-20220531110349-2169" [2f0ae197-3c23-4ee4-a342-221494979b29] Running
	I0531 11:10:59.933466   12940 system_pods.go:89] "kube-proxy-2pcc2" [b0618709-0f72-4a65-9379-8838a18e826c] Running
	I0531 11:10:59.933470   12940 system_pods.go:89] "kube-scheduler-no-preload-20220531110349-2169" [043aa359-d83f-4904-ad1d-8cc3ce571c62] Running
	I0531 11:10:59.933475   12940 system_pods.go:89] "metrics-server-b955d9d8-xd4wv" [27025e59-a89a-49b0-b7ab-6d9daab5c880] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:10:59.933480   12940 system_pods.go:89] "storage-provisioner" [6f6a080a-3fc2-4d79-b754-6d309648dcd3] Running
	I0531 11:10:59.933485   12940 system_pods.go:126] duration metric: took 203.935603ms to wait for k8s-apps to be running ...
	I0531 11:10:59.933490   12940 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 11:10:59.933540   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:10:59.944445   12940 system_svc.go:56] duration metric: took 10.951036ms WaitForService to wait for kubelet.
	I0531 11:10:59.944462   12940 kubeadm.go:572] duration metric: took 6.018868929s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 11:10:59.944478   12940 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:11:00.129933   12940 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:11:00.129945   12940 node_conditions.go:123] node cpu capacity is 6
	I0531 11:11:00.129954   12940 node_conditions.go:105] duration metric: took 185.474872ms to run NodePressure ...
	I0531 11:11:00.129963   12940 start.go:213] waiting for startup goroutines ...
	I0531 11:11:00.161044   12940 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:11:00.181322   12940 out.go:177] * Done! kubectl is now configured to use "no-preload-20220531110349-2169" cluster and "default" namespace by default
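This "Done!" line ends the process-12940 story: the no-preload cluster came up, all readiness gates passed, and the recorded client/cluster minor-version skew of 1 (kubectl 1.24.0 against cluster 1.23.6) was small enough to proceed without a warning. Everything that follows in this excerpt belongs to process 13098, whose polling loop never finds a kube-apiserver container and keeps cycling through the same diagnostics gathering until it is cut off mid-stream at the end of the section.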
	I0531 11:10:59.886711   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.886723   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:10:59.886777   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:10:59.917269   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.917283   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:10:59.917349   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:10:59.953208   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.953222   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:10:59.953295   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:10:59.985163   13098 logs.go:274] 0 containers: []
	W0531 11:10:59.985175   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:10:59.985230   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:00.019546   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.019559   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:00.019619   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:00.048681   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.048694   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:00.048750   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:00.080858   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.080875   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:00.080942   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:00.116240   13098 logs.go:274] 0 containers: []
	W0531 11:11:00.116252   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:00.116258   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:00.116267   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:00.129973   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:00.129986   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:00.191716   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:00.191728   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:00.191748   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:00.207100   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:00.207112   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:02.269342   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062241719s)
	I0531 11:11:02.269451   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:02.269458   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:04.814644   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:04.855355   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:04.899388   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.899403   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:04.899460   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:04.931294   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.931308   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:04.931372   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:04.966850   13098 logs.go:274] 0 containers: []
	W0531 11:11:04.966868   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:04.966930   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:05.006753   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.006766   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:05.006825   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:05.035514   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.035528   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:05.035581   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:05.071606   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.071618   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:05.071679   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:05.113543   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.113558   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:05.113622   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:05.158389   13098 logs.go:274] 0 containers: []
	W0531 11:11:05.158403   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:05.158412   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:05.158420   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:05.209536   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:05.209555   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:05.226226   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:05.226244   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:05.293642   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:05.293653   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:05.293661   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:05.314581   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:05.314597   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:07.372712   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058122008s)
	I0531 11:11:09.873521   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:10.356773   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:10.386073   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.386085   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:10.386139   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:10.415320   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.415332   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:10.415399   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:10.444338   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.444352   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:10.444410   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:10.472812   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.472823   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:10.472880   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:10.500902   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.500914   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:10.500971   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:10.530609   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.530621   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:10.530672   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:10.561973   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.561987   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:10.562047   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:10.591600   13098 logs.go:274] 0 containers: []
	W0531 11:11:10.591611   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:10.591618   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:10.591625   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:10.648762   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:10.648773   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:10.648779   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:10.660930   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:10.660942   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:12.715163   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054233595s)
	I0531 11:11:12.715268   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:12.715274   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:12.757025   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:12.757041   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:15.269700   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:15.355475   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:15.385163   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.385180   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:15.385236   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:15.417139   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.417153   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:15.417210   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:15.447785   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.447798   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:15.447864   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:15.476820   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.476832   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:15.476893   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:15.506443   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.506459   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:15.506517   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:15.535403   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.535422   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:15.535490   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:15.563398   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.563411   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:15.563468   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:15.592213   13098 logs.go:274] 0 containers: []
	W0531 11:11:15.592225   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:15.592238   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:15.592245   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:15.631327   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:15.631342   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:15.642726   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:15.642740   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:15.694280   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:15.694292   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:15.694300   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:15.706180   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:15.706192   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:17.759941   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053759496s)
	I0531 11:11:20.260199   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:20.357081   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:20.391524   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.391536   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:20.391588   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:20.420970   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.420982   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:20.421037   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:20.452134   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.452148   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:20.452206   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:20.483165   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.483176   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:20.483217   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:20.512821   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.512834   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:20.512892   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:20.543804   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.543816   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:20.543877   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:20.575838   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.575850   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:20.575908   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:20.607187   13098 logs.go:274] 0 containers: []
	W0531 11:11:20.607200   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:20.607206   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:20.607214   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:20.620268   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:20.620287   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:20.683805   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:20.683818   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:20.683825   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:20.696565   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:20.696583   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:22.757052   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060481864s)
	I0531 11:11:22.757167   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:22.757175   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:25.296888   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:25.356633   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:25.388153   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.388166   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:25.388229   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:25.417984   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.417997   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:25.418052   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:25.447364   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.447376   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:25.447432   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:25.475704   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.475718   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:25.475772   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:25.504817   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.504830   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:25.504882   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:25.534188   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.534200   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:25.534255   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:25.562856   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.562868   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:25.562922   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:25.592490   13098 logs.go:274] 0 containers: []
	W0531 11:11:25.592503   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:25.592509   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:25.592517   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:25.604749   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:25.604762   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:25.657748   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:25.657758   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:25.657765   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:25.669778   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:25.669790   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:27.727458   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057680964s)
	I0531 11:11:27.727570   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:27.727577   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:30.268792   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:30.355702   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:30.385351   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.385362   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:30.385416   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:30.416692   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.416704   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:30.416756   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:30.446080   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.446092   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:30.446148   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:30.475837   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.475850   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:30.475904   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:30.505855   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.505866   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:30.505919   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:30.534660   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.534673   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:30.534735   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:30.563972   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.563985   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:30.564039   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:30.593062   13098 logs.go:274] 0 containers: []
	W0531 11:11:30.593075   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:30.593082   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:30.593089   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:30.604860   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:30.604873   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:30.657067   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:30.657079   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:30.657087   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:30.669385   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:30.669397   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:32.725632   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056248231s)
	I0531 11:11:32.725738   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:32.725745   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:35.265482   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:35.356955   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:35.388680   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.388693   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:35.388746   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:35.418234   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.418247   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:35.418306   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:35.448424   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.448436   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:35.448488   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:35.477114   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.477126   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:35.477183   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:35.507149   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.507160   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:35.507222   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:35.536636   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.536648   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:35.536706   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:35.566077   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.566089   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:35.566147   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:35.596667   13098 logs.go:274] 0 containers: []
	W0531 11:11:35.596680   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:35.596686   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:35.596693   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:37.649220   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052538655s)
	I0531 11:11:37.649329   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:37.649337   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:37.690050   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:37.690063   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:37.701532   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:37.701545   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:37.754370   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:37.754382   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:37.754389   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:40.266957   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:40.356874   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:40.387551   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.387563   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:40.387617   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:40.416687   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.416699   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:40.416751   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:40.446274   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.446288   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:40.446341   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:40.477123   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.477138   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:40.477196   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:40.507689   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.507702   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:40.507752   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:40.538333   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.538346   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:40.538398   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:40.568456   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.568468   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:40.568524   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:40.598870   13098 logs.go:274] 0 containers: []
	W0531 11:11:40.598883   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:40.598891   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:40.598898   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:40.637605   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:40.637623   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:40.650027   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:40.650045   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:40.702714   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:40.702727   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:40.702734   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:40.715145   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:40.715158   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:42.769567   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054421687s)
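The container-status one-liner deserves a note: the backtick substitution resolves crictl's full path when it is installed (otherwise it yields the literal word crictl, whose invocation fails), and the trailing "|| sudo docker ps -a" then falls back to plain Docker. A self-contained Go sketch that runs the same command; the timeout value is an assumption:

    // container_status_sketch.go: run the crictl-or-docker fallback seen above.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func containerStatus(ctx context.Context) (string, error) {
        // Prefer crictl when present; otherwise fall back to plain docker ps.
        const script = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.CommandContext(ctx, "/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) // assumed timeout
        defer cancel()
        out, err := containerStatus(ctx)
        fmt.Println(out, err)
    }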
	I0531 11:11:45.271767   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:45.354742   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:45.384335   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.384348   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:45.384402   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:45.415481   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.415493   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:45.415567   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:45.444878   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.444892   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:45.444964   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:45.474544   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.474557   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:45.474616   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:45.504114   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.504126   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:45.504184   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:45.532825   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.532838   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:45.532893   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:45.561687   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.561699   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:45.561752   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:45.592123   13098 logs.go:274] 0 containers: []
	W0531 11:11:45.592136   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:45.592143   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:45.592149   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:45.631894   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:45.631908   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:45.643759   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:45.643771   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:45.743249   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:45.743266   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:45.743273   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:45.755246   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:45.755258   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:47.813698   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058453882s)
	I0531 11:11:50.316034   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:50.355463   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:50.385123   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.385136   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:50.385190   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:50.414943   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.414957   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:50.415012   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:50.443429   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.443441   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:50.443498   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:50.472680   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.472693   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:50.472747   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:50.501429   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.501443   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:50.501501   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:50.531478   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.531489   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:50.531545   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:50.563245   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.563259   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:50.563317   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:50.593840   13098 logs.go:274] 0 containers: []
	W0531 11:11:50.593852   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:50.593858   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:50.593865   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:50.661648   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:50.661658   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:50.661667   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:11:50.673634   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:50.673646   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:52.731947   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058312875s)
	I0531 11:11:52.732053   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:52.732060   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:52.771014   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:52.771030   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:55.283215   13098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:11:55.354561   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:11:55.386892   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.386904   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:11:55.386963   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:11:55.417758   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.417772   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:11:55.417829   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:11:55.448756   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.448769   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:11:55.448826   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:11:55.483671   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.483685   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:11:55.483744   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:11:55.514477   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.514487   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:11:55.514555   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:11:55.544537   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.544548   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:11:55.544607   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:11:55.573745   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.573759   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:11:55.573817   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:11:55.606613   13098 logs.go:274] 0 containers: []
	W0531 11:11:55.606628   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:11:55.606637   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:11:55.606644   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:11:57.665311   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058678422s)
	I0531 11:11:57.665420   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:11:57.665427   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:11:57.704653   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:11:57.704668   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:11:57.718596   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:11:57.718610   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:11:57.773458   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:11:57.773476   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:11:57.773490   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:05:04 UTC, end at Tue 2022-05-31 18:12:01 UTC. --
	May 31 18:10:20 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:20.490848518Z" level=info msg="ignoring event" container=3dc2f2ab920549d3659cd289cc7b2b744cea982221569ae9b2922a0a55a6c231 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:20 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:20.597940997Z" level=info msg="ignoring event" container=58ff821a53d622f757e07fe49f300947225450a8facc7f294d17806076be3822 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:30 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:30.685124723Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=2fba2b7716ebe019445341ca7b305b76cfb828936fabd67fbb3bc70dafa9c890
	May 31 18:10:30 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:30.713579858Z" level=info msg="ignoring event" container=2fba2b7716ebe019445341ca7b305b76cfb828936fabd67fbb3bc70dafa9c890 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:30 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:30.809281678Z" level=info msg="ignoring event" container=e367253bc9d8450f64e1201b74dcb7fd8bc245ccb1dcc60ef48afb8962e06ebd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:30 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:30.904398601Z" level=info msg="ignoring event" container=b723cc39a68f2ad735a46c02d42d51c7881cd89ad90614d82456531237ade7ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:31 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:31.002169613Z" level=info msg="ignoring event" container=a47e12341a57060153a2395fdf05b58b95b6984bc76b0e307a1c90335f529d7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:31 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:31.114366560Z" level=info msg="ignoring event" container=ddb9cd08be659b41200e32869d8ddba40e31ddbbaa6b6718a4b5dcc51f98fefa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:53 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:53.942454999Z" level=info msg="ignoring event" container=325a0b8dc79d2f1e8211b46d72f7fe6467fc4b7020e773670ca57a1ed8f2dc49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:10:56 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:56.539915388Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:10:56 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:56.540036016Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:10:56 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:56.540981076Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:10:57 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:57.666351328Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:10:57 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:10:57.968119280Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:11:01 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:01.473186420Z" level=info msg="ignoring event" container=7f16f785e81b1f95fdddb27b7905bf1a7797715467a0e44049a08980b139ddf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:11:01 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:01.504073470Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 18:11:02 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:02.393454009Z" level=info msg="ignoring event" container=0a40a1e4f58ff48b015ce55776c73610959d80a2431f9616c1bb2bc3171bd464 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:11:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:10.747583874Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:10.747660580Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:10 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:10.748838030Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:17 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:17.825933841Z" level=info msg="ignoring event" container=f5148b430d0890a100388c0ebd7884924fc7647a7f0ed7dd8f9ac178f95784db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:11:59 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:59.066961047Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:59 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:59.067006791Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:59 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:59.076567957Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:11:59 no-preload-20220531110349-2169 dockerd[129]: time="2022-05-31T18:11:59.718193681Z" level=info msg="ignoring event" container=9b87f5a28789f43638b27081be58ccc5a478a4ed83eeedc7e12c066c3e63bb70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
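Two distinct failure classes appear in this journal: repeated image pulls against fake.domain that fail DNS resolution (192.168.65.2:53 is Docker Desktop's embedded resolver), which looks like a deliberately unresolvable image reference used by the test, and a schema-1 (prettyjws) manifest warning for k8s.gcr.io/echoserver:1.4. A small Go preflight that reproduces the DNS half of the failure; the hostnames are copied from the log:

    // registry_dns_sketch.go: resolve registry hosts before attempting a pull,
    // mirroring the lookup failure Docker reports above.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, host := range []string{"fake.domain", "k8s.gcr.io"} {
            if _, err := net.LookupHost(host); err != nil {
                fmt.Printf("%s: unresolvable (%v); pulls from it will fail as in the log\n", host, err)
                continue
            }
            fmt.Printf("%s: resolves\n", host)
        }
    }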
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	9b87f5a28789f       a90209bb39e3d                                                                                    2 seconds ago        Exited              dashboard-metrics-scraper   3                   0c59515a76f58
	f36eb7f887ca6       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   54 seconds ago       Running             kubernetes-dashboard        0                   819a50ab0622d
	2b584a0d4078f       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   8dc157329ae9e
	1e71e41ab1319       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   2fead9389c6ab
	020ee73ec6b03       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   8fa06b665c442
	f38e27da8f8f8       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   ae4c2a351166c
	92491c72e5133       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   842d919e4a2cb
	745b99d504e4d       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   fd31a20ac4384
	9a2a1f0c21c52       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   b4a24a517ab62
	
	* 
	* ==> coredns [1e71e41ab131] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
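The two "Running configuration MD5" lines bracketing the reload show the configuration changed exactly once, and the health plugin's five-second lameduck keeps the old configuration serving while the new one loads. For comparing against the logged digests, a Go sketch that hashes a local Corefile, under the assumption that the logged MD5 is computed over the raw configuration text; the path is illustrative:

    // corefile_md5_sketch.go: compute an MD5 of a Corefile for comparison with
    // the digests CoreDNS logs on (re)load. Path and assumption noted above.
    package main

    import (
        "crypto/md5"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/etc/coredns/Corefile") // illustrative path
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%x\n", md5.Sum(data))
    }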
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220531110349-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220531110349-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=no-preload-20220531110349-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T11_10_40_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:10:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220531110349-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:11:55 +0000   Tue, 31 May 2022 18:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:11:55 +0000   Tue, 31 May 2022 18:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:11:55 +0000   Tue, 31 May 2022 18:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:11:55 +0000   Tue, 31 May 2022 18:11:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    no-preload-20220531110349-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                d5e74baf-ef0e-467f-9551-8b0c3a613a0f
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-r9cpx                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     69s
	  kube-system                 etcd-no-preload-20220531110349-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kube-apiserver-no-preload-20220531110349-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-no-preload-20220531110349-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-2pcc2                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-no-preload-20220531110349-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 metrics-server-b955d9d8-xd4wv                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         67s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-2s5p9                0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-wzj5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 67s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    88s (x4 over 88s)  kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x3 over 88s)  kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  88s (x4 over 88s)  kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientMemory
	  Normal  Starting                 82s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s                kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s                kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s                kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                82s                kubelet     Node no-preload-20220531110349-2169 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x2 over 7s)    kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x2 over 7s)    kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x2 over 7s)    kubelet     Node no-preload-20220531110349-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet     Node no-preload-20220531110349-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet     Node no-preload-20220531110349-2169 status is now: NodeReady
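Three generations of kubelet events are visible here (88s, 82s, and 7s ago), meaning the kubelet restarted twice; the NodeNotReady/NodeReady pair at 7s lines up with the controller-manager entering and leaving master disruption mode at 18:11:54 and 18:11:59 in the kube-controller-manager section below. To pull the same Node events straight from the API, a client-go sketch; the kubeconfig path is an assumption:

    // node_events_sketch.go: list the Node's events via client-go.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Node events are recorded in the "default" namespace.
        evs, err := cs.CoreV1().Events("default").List(context.Background(), metav1.ListOptions{
            FieldSelector: "involvedObject.kind=Node,involvedObject.name=no-preload-20220531110349-2169",
        })
        if err != nil {
            panic(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
        }
    }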
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [f38e27da8f8f] <==
	* {"level":"info","ts":"2022-05-31T18:10:35.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T18:10:35.360Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:10:35.361Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:10:35.455Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:no-preload-20220531110349-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:10:35.456Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:10:35.456Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:10:35.457Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:10:35.457Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:10:35.457Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:10:35.457Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:10:35.458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:10:35.458Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:10:35.458Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:10:35.458Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  18:12:02 up 59 min,  0 users,  load average: 0.86, 0.99, 1.15
	Linux no-preload-20220531110349-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [92491c72e513] <==
	* I0531 18:10:38.929955       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:10:38.955628       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:10:39.002940       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 18:10:39.006751       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 18:10:39.007503       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:10:39.010431       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:10:39.795743       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:10:40.398623       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:10:40.405328       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:10:40.412657       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:10:40.577882       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:10:52.930769       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:10:53.482723       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:10:54.426614       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:10:55.440474       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.100.110.30]
	I0531 18:10:56.113798       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.101.217.54]
	I0531 18:10:56.125077       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.99.244.19]
	W0531 18:10:56.339122       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:10:56.339262       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:10:56.339302       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:11:56.295540       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:11:56.295639       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:11:56.295645       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
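The recurring 503 means the aggregated APIService v1beta1.metrics.k8s.io is registered but has no ready backend, which is consistent with the metrics-server image pull failing against fake.domain in the Docker journal above, so the OpenAPI aggregation controller rate-limit-requeues it; the controller-manager's "unable to retrieve the complete list of server APIs" entries below share the same root cause. One way to check the APIService condition directly, sketched by shelling out to kubectl:

    // apiservice_check_sketch.go: read the Available condition of the
    // aggregated metrics API, the service behind the 503s above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "get", "apiservice", "v1beta1.metrics.k8s.io",
            "-o", "jsonpath={.status.conditions[?(@.type==\"Available\")].status}").CombinedOutput()
        fmt.Printf("Available=%s err=%v\n", out, err)
    }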
	
	* 
	* ==> kube-controller-manager [9a2a1f0c21c5] <==
	* E0531 18:10:55.970000       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:10:55.970298       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:10:55.970333       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:10:56.001203       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:10:56.003530       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:10:56.003569       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:10:56.007428       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:10:56.007468       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:10:56.010275       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:10:56.010309       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:10:56.020851       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-wzj5d"
	I0531 18:10:56.027119       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-2s5p9"
	E0531 18:11:54.676630       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0531 18:11:54.676764       1 event.go:294] "Event occurred" object="no-preload-20220531110349-2169" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node no-preload-20220531110349-2169 status is now: NodeNotReady"
	I0531 18:11:54.680111       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77-wzj5d" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0531 18:11:54.683136       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0531 18:11:54.686027       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-no-preload-20220531110349-2169" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.690833       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-2pcc2" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.695333       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d-r9cpx" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.758312       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.763397       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-no-preload-20220531110349-2169" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.768382       1 event.go:294] "Event occurred" object="kube-system/etcd-no-preload-20220531110349-2169" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:54.773870       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0531 18:11:54.773911       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-no-preload-20220531110349-2169" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 18:11:59.775109       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [020ee73ec6b0] <==
	* I0531 18:10:54.240354       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:10:54.240420       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:10:54.240467       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:10:54.422152       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:10:54.422182       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:10:54.422191       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:10:54.422217       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:10:54.423862       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:10:54.424818       1 config.go:317] "Starting service config controller"
	I0531 18:10:54.424843       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:10:54.424868       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:10:54.424873       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:10:54.525811       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:10:54.525850       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [745b99d504e4] <==
	* W0531 18:10:37.732605       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:10:37.732621       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:10:37.731711       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:10:37.732747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:10:37.731700       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:10:37.732759       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:10:37.732786       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:10:37.732959       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:10:37.732818       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:10:37.733023       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:10:37.733496       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:10:37.733526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:10:38.644695       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:10:38.644732       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:10:38.651685       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:10:38.651718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:10:38.683758       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:10:38.683794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:10:38.691804       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:10:38.691855       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:10:38.725939       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:10:38.725975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:10:38.769317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:10:38.769363       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0531 18:10:39.127565       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:05:04 UTC, end at Tue 2022-05-31 18:12:03 UTC. --
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305117    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2qt\" (UniqueName: \"kubernetes.io/projected/27025e59-a89a-49b0-b7ab-6d9daab5c880-kube-api-access-4q2qt\") pod \"metrics-server-b955d9d8-xd4wv\" (UID: \"27025e59-a89a-49b0-b7ab-6d9daab5c880\") " pod="kube-system/metrics-server-b955d9d8-xd4wv"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305134    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgzb4\" (UniqueName: \"kubernetes.io/projected/1e85297b-8675-49ff-bed8-a051aa621a28-kube-api-access-cgzb4\") pod \"kubernetes-dashboard-8469778f77-wzj5d\" (UID: \"1e85297b-8675-49ff-bed8-a051aa621a28\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-wzj5d"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305147    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0618709-0f72-4a65-9379-8838a18e826c-xtables-lock\") pod \"kube-proxy-2pcc2\" (UID: \"b0618709-0f72-4a65-9379-8838a18e826c\") " pod="kube-system/kube-proxy-2pcc2"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305167    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mnjh\" (UniqueName: \"kubernetes.io/projected/b0618709-0f72-4a65-9379-8838a18e826c-kube-api-access-8mnjh\") pod \"kube-proxy-2pcc2\" (UID: \"b0618709-0f72-4a65-9379-8838a18e826c\") " pod="kube-system/kube-proxy-2pcc2"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305183    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e85297b-8675-49ff-bed8-a051aa621a28-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-wzj5d\" (UID: \"1e85297b-8675-49ff-bed8-a051aa621a28\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-wzj5d"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305196    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cf9cb-7184-4170-934e-1d7cfe1d690e-config-volume\") pod \"coredns-64897985d-r9cpx\" (UID: \"fb5cf9cb-7184-4170-934e-1d7cfe1d690e\") " pod="kube-system/coredns-64897985d-r9cpx"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305210    7321 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtsvl\" (UniqueName: \"kubernetes.io/projected/a2d6a943-79eb-4c29-a9d5-6ab70b33fa42-kube-api-access-rtsvl\") pod \"dashboard-metrics-scraper-56974995fc-2s5p9\" (UID: \"a2d6a943-79eb-4c29-a9d5-6ab70b33fa42\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-2s5p9"
	May 31 18:11:56 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:56.305218    7321 reconciler.go:157] "Reconciler: start to sync state"
	May 31 18:11:57 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:57.474147    7321 request.go:665] Waited for 1.174425753s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 31 18:11:57 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:57.550065    7321 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220531110349-2169\" already exists" pod="kube-system/kube-scheduler-no-preload-20220531110349-2169"
	May 31 18:11:57 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:57.723189    7321 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220531110349-2169\" already exists" pod="kube-system/kube-apiserver-no-preload-20220531110349-2169"
	May 31 18:11:57 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:57.936688    7321 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220531110349-2169\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220531110349-2169"
	May 31 18:11:58 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:58.077954    7321 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220531110349-2169\" already exists" pod="kube-system/etcd-no-preload-20220531110349-2169"
	May 31 18:11:59 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:59.077155    7321 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	May 31 18:11:59 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:59.077216    7321 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	May 31 18:11:59 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:59.077330    7321 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4q2qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-xd4wv_kube-system(27025e59-a89a-49b0-b7ab-6d9daab5c880): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 18:11:59 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:11:59.077363    7321 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-xd4wv" podUID=27025e59-a89a-49b0-b7ab-6d9daab5c880
	May 31 18:11:59 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:11:59.578669    7321 scope.go:110] "RemoveContainer" containerID="f5148b430d0890a100388c0ebd7884924fc7647a7f0ed7dd8f9ac178f95784db"
	May 31 18:12:00 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:12:00.319655    7321 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-2s5p9 through plugin: invalid network status for"
	May 31 18:12:00 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:12:00.323874    7321 scope.go:110] "RemoveContainer" containerID="f5148b430d0890a100388c0ebd7884924fc7647a7f0ed7dd8f9ac178f95784db"
	May 31 18:12:00 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:12:00.324543    7321 scope.go:110] "RemoveContainer" containerID="9b87f5a28789f43638b27081be58ccc5a478a4ed83eeedc7e12c066c3e63bb70"
	May 31 18:12:00 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:12:00.324855    7321 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-2s5p9_kubernetes-dashboard(a2d6a943-79eb-4c29-a9d5-6ab70b33fa42)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-2s5p9" podUID=a2d6a943-79eb-4c29-a9d5-6ab70b33fa42
	May 31 18:12:01 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:12:01.330398    7321 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-2s5p9 through plugin: invalid network status for"
	May 31 18:12:01 no-preload-20220531110349-2169 kubelet[7321]: I0531 18:12:01.333322    7321 scope.go:110] "RemoveContainer" containerID="9b87f5a28789f43638b27081be58ccc5a478a4ed83eeedc7e12c066c3e63bb70"
	May 31 18:12:01 no-preload-20220531110349-2169 kubelet[7321]: E0531 18:12:01.333508    7321 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-2s5p9_kubernetes-dashboard(a2d6a943-79eb-4c29-a9d5-6ab70b33fa42)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-2s5p9" podUID=a2d6a943-79eb-4c29-a9d5-6ab70b33fa42
	
	* 
	* ==> kubernetes-dashboard [f36eb7f887ca] <==
	* 2022/05/31 18:11:08 Starting overwatch
	2022/05/31 18:11:08 Using namespace: kubernetes-dashboard
	2022/05/31 18:11:08 Using in-cluster config to connect to apiserver
	2022/05/31 18:11:08 Using secret token for csrf signing
	2022/05/31 18:11:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 18:11:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 18:11:08 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 18:11:08 Generating JWE encryption key
	2022/05/31 18:11:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 18:11:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 18:11:08 Initializing JWE encryption key from synchronized object
	2022/05/31 18:11:08 Creating in-cluster Sidecar client
	2022/05/31 18:11:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 18:11:08 Serving insecurely on HTTP port: 9090
	2022/05/31 18:11:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [2b584a0d4078] <==
	* I0531 18:10:56.301581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:10:56.309072       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:10:56.309101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:10:56.314323       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:10:56.314428       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff3eac1c-c091-4429-8906-238a6d563305", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220531110349-2169_55feb2b2-02d3-40bc-8de3-68e7a342e851 became leader
	I0531 18:10:56.314455       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220531110349-2169_55feb2b2-02d3-40bc-8de3-68e7a342e851!
	I0531 18:10:56.415576       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220531110349-2169_55feb2b2-02d3-40bc-8de3-68e7a342e851!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220531110349-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-xd4wv
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220531110349-2169 describe pod metrics-server-b955d9d8-xd4wv
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220531110349-2169 describe pod metrics-server-b955d9d8-xd4wv: exit status 1 (285.54304ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-xd4wv" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220531110349-2169 describe pod metrics-server-b955d9d8-xd4wv: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (43.95s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:16:57.109590    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:17:23.688027    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:17:27.913583    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:17:46.114855    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:18:03.056302    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:18:35.736483    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:19:08.518942    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:20:07.533747    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:20:09.424886    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:20:14.684236    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:20:20.962768    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:21:33.975266    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:21:37.729544    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:21:51.479235    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:21:57.114894    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:22:27.917606    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:22:57.028716    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:23:03.056230    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:23:20.238532    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:23:35.732935    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:23:46.306976    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:23:50.967703    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:23:57.920429    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:24:00.027039    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:24:08.515531    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:24:39.834156    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:24:58.784803    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:25:14.680656    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:25:28.427224    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:25:31.571479    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:276: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
start_stop_delete_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 2 (439.498671ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:276: status error: exit status 2 (may be ok)
start_stop_delete_test.go:276: "old-k8s-version-20220531110241-2169" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531110241-2169
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531110241-2169:

-- stdout --
	[
	    {
	        "Id": "df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815",
	        "Created": "2022-05-31T18:02:47.387078025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:08:26.190082098Z",
	            "FinishedAt": "2022-05-31T18:08:23.336567271Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hostname",
	        "HostsPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hosts",
	        "LogPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815-json.log",
	        "Name": "/old-k8s-version-20220531110241-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531110241-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531110241-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531110241-2169",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531110241-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531110241-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "49bd121b76d28de5c01cec5b2b9b781e9e3115310e778c754e0a43752d617ff2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51934"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51937"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/49bd121b76d2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531110241-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "df301a213db6",
	                        "old-k8s-version-20220531110241-2169"
	                    ],
	                    "NetworkID": "371f88932f2f86b1e4c7d7ee4813eb521c132449a1b646e6adc62c4e1df95fe6",
	                    "EndpointID": "4a1e8f65e10d901150ca70abb003401b842c1eb5fb0be5bb24a9c98ec896642f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
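The port map in the inspect dump above can be read back with the same Go-template style the harness itself uses; a minimal sketch against this container (name and expected value taken from this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-20220531110241-2169
	# expected output: 51933 (the SSH host port recorded above)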
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 2 (439.513985ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
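The --format flag renders a Go template over minikube's status struct, so a single field can be asserted at a time; a hedged sketch for checking more than the host state (the Kubelet and APIServer field names are assumed from the default status output, not shown in this run):

	out/minikube-darwin-amd64 status -p old-k8s-version-20220531110241-2169 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'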
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 logs -n 25: (3.479216224s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p                                                | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                                |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169               | old-k8s-version-20220531110241-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:16 PDT | 31 May 22 11:16 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531111208-2169                   | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531111208-2169                   | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220531111946-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | disable-driver-mounts-20220531111946-2169         |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
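	# The Audit table above is rendered from minikube's persistent audit log. A sketch for
	# querying it directly; the path under the MINIKUBE_HOME used in this run and the jq
	# usage are assumptions, not part of the captured output:
	#   jq -r '[.data.startTime, .data.command, .data.profile] | @tsv' "$MINIKUBE_HOME/logs/audit.json"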
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:20:52
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:20:52.944881   14088 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:20:52.945084   14088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:20:52.945089   14088 out.go:309] Setting ErrFile to fd 2...
	I0531 11:20:52.945093   14088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:20:52.945194   14088 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:20:52.945466   14088 out.go:303] Setting JSON to false
	I0531 11:20:52.960339   14088 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4821,"bootTime":1654016431,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:20:52.960440   14088 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:20:52.982638   14088 out.go:177] * [default-k8s-different-port-20220531111947-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:20:53.025482   14088 notify.go:193] Checking for updates...
	I0531 11:20:53.047412   14088 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:20:53.069297   14088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:20:53.090403   14088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:20:53.112640   14088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:20:53.134605   14088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:20:53.156922   14088 config.go:178] Loaded profile config "default-k8s-different-port-20220531111947-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:20:53.157647   14088 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:20:53.231919   14088 docker.go:137] docker version: linux-20.10.14
	I0531 11:20:53.232051   14088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:20:53.359110   14088 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:20:53.293756437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
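	# The docker info snapshot above can be reproduced outside the harness; only the docker
	# invocation is taken from this run, jq is an added assumption:
	#   docker system info --format '{{json .}}' | jq '{ServerVersion, OperatingSystem, NCPU, MemTotal}'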
	I0531 11:20:53.402586   14088 out.go:177] * Using the docker driver based on existing profile
	I0531 11:20:53.424356   14088 start.go:284] selected driver: docker
	I0531 11:20:53.424384   14088 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:20:53.424528   14088 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:20:53.427949   14088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:20:53.551765   14088 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:20:53.48889853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:20:53.551941   14088 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:20:53.551960   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:20:53.551966   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:20:53.551973   14088 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
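	# The resolved config above is persisted per profile as JSON (see the Saving config line
	# below); a quick check of the non-default API server port, assuming the JSON keys match
	# the Go field names shown here, jq is available, and MINIKUBE_HOME is the one from this run:
	#   jq '.KubernetesConfig.NodePort' "$MINIKUBE_HOME/profiles/default-k8s-different-port-20220531111947-2169/config.json"
	#   # expected: 8444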
	I0531 11:20:53.574240   14088 out.go:177] * Starting control plane node default-k8s-different-port-20220531111947-2169 in cluster default-k8s-different-port-20220531111947-2169
	I0531 11:20:53.595811   14088 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:20:53.617672   14088 out.go:177] * Pulling base image ...
	I0531 11:20:53.660942   14088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:20:53.661017   14088 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:20:53.661021   14088 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:20:53.661045   14088 cache.go:57] Caching tarball of preloaded images
	I0531 11:20:53.661255   14088 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:20:53.661288   14088 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
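	# The preload step above only verifies that the tarball exists in the cache; its contents
	# can be listed without extracting, assuming lz4 is on PATH:
	#   lz4 -dc "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4" | tar -tf - | head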
	I0531 11:20:53.662334   14088 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/config.json ...
	I0531 11:20:53.728191   14088 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:20:53.728208   14088 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:20:53.728219   14088 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:20:53.728284   14088 start.go:352] acquiring machines lock for default-k8s-different-port-20220531111947-2169: {Name:mk78e9fe98c6a3e232878ce765bd193e5b506828 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:20:53.728368   14088 start.go:356] acquired machines lock for "default-k8s-different-port-20220531111947-2169" in 55.664µs
	I0531 11:20:53.728390   14088 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:20:53.728397   14088 fix.go:55] fixHost starting: 
	I0531 11:20:53.728613   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:20:53.795533   14088 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220531111947-2169: state=Stopped err=<nil>
	W0531 11:20:53.795566   14088 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:20:53.839440   14088 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220531111947-2169" ...
	I0531 11:20:53.861504   14088 cli_runner.go:164] Run: docker start default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.214277   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:20:54.285676   14088 kic.go:416] container "default-k8s-different-port-20220531111947-2169" state is running.
	I0531 11:20:54.286268   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.359103   14088 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/config.json ...
	I0531 11:20:54.359483   14088 machine.go:88] provisioning docker machine ...
	I0531 11:20:54.359511   14088 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220531111947-2169"
	I0531 11:20:54.359571   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.431991   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:54.432193   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:54.432206   14088 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220531111947-2169 && echo "default-k8s-different-port-20220531111947-2169" | sudo tee /etc/hostname
	I0531 11:20:54.553685   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220531111947-2169
	
	I0531 11:20:54.553769   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.625847   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:54.625998   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:54.626013   14088 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220531111947-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220531111947-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220531111947-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:20:54.740939   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:20:54.740960   14088 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:20:54.740983   14088 ubuntu.go:177] setting up certificates
	I0531 11:20:54.740993   14088 provision.go:83] configureAuth start
	I0531 11:20:54.741060   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.813502   14088 provision.go:138] copyHostCerts
	I0531 11:20:54.813586   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:20:54.813597   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:20:54.813681   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:20:54.813909   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:20:54.813929   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:20:54.813988   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:20:54.814120   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:20:54.814127   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:20:54.814187   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:20:54.814303   14088 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220531111947-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220531111947-2169]
	I0531 11:20:54.984093   14088 provision.go:172] copyRemoteCerts
	I0531 11:20:54.984161   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:20:54.984204   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.054898   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:55.140792   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:20:55.157975   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0531 11:20:55.174955   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:20:55.192282   14088 provision.go:86] duration metric: configureAuth took 451.28007ms
	I0531 11:20:55.192295   14088 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:20:55.192463   14088 config.go:178] Loaded profile config "default-k8s-different-port-20220531111947-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:20:55.192523   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.261854   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.262008   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.262018   14088 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:20:55.374013   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:20:55.374024   14088 ubuntu.go:71] root file system type: overlay
	I0531 11:20:55.374182   14088 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:20:55.374259   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.444497   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.444646   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.444717   14088 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:20:55.566811   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:20:55.566903   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.637162   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.637315   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.637331   14088 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:20:55.756943   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: 
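	# The unit written above follows the systemd drop-in override pattern: the bare "ExecStart="
	# clears the inherited command so the second ExecStart replaces it rather than appending.
	# One way to confirm the effective value from inside the node (profile name from this run):
	#   out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220531111947-2169 "sudo systemctl show docker -p ExecStart"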
	I0531 11:20:55.756956   14088 machine.go:91] provisioned docker machine in 1.397481881s
	I0531 11:20:55.756966   14088 start.go:306] post-start starting for "default-k8s-different-port-20220531111947-2169" (driver="docker")
	I0531 11:20:55.756972   14088 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:20:55.757026   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:20:55.757069   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.826937   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:55.911267   14088 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:20:55.914903   14088 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:20:55.914917   14088 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:20:55.914925   14088 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:20:55.914929   14088 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:20:55.914937   14088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:20:55.915031   14088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:20:55.915160   14088 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:20:55.915312   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:20:55.922282   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:20:55.940097   14088 start.go:309] post-start completed in 183.123925ms
	I0531 11:20:55.940169   14088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:20:55.940229   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.011052   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.090867   14088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:20:56.095336   14088 fix.go:57] fixHost completed within 2.36696395s
	I0531 11:20:56.095355   14088 start.go:81] releasing machines lock for "default-k8s-different-port-20220531111947-2169", held for 2.367007248s
	I0531 11:20:56.095435   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.165619   14088 ssh_runner.go:195] Run: systemctl --version
	I0531 11:20:56.165622   14088 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:20:56.165682   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.165696   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.241266   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.242997   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.324128   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:20:56.452142   14088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:20:56.462221   14088 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:20:56.462271   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:20:56.472988   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:20:56.486894   14088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:20:56.552509   14088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:20:56.624892   14088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:20:56.634564   14088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:20:56.697617   14088 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:20:56.707335   14088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:20:56.742123   14088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:20:56.823696   14088 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:20:56.823896   14088 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220531111947-2169 dig +short host.docker.internal
	I0531 11:20:56.945635   14088 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:20:56.945757   14088 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:20:56.950017   14088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
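
The one-liner above makes the `host.minikube.internal` mapping idempotent: `grep -v` drops any stale entry, the fresh line is appended, and the result is copied (not renamed) over `/etc/hosts`, likely because inside a container `/etc/hosts` is bind-mounted by Docker and must be rewritten in place. A rough Go equivalent; `setHostsEntry` is a hypothetical helper, not minikube's code:

    package main

    import (
        "os"
        "strings"
    )

    // setHostsEntry drops any existing line for host and appends "ip\thost".
    func setHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) { // same filter as grep -v
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        // Write in place; renaming a temp file over /etc/hosts can fail in
        // containers because Docker bind-mounts the file.
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
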
	I0531 11:20:56.959956   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:57.030149   14088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:20:57.030228   14088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:20:57.061515   14088 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:20:57.061532   14088 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:20:57.061601   14088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:20:57.092363   14088 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:20:57.092379   14088 cache_images.go:84] Images are preloaded, skipping loading
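
`docker images --format {{.Repository}}:{{.Tag}}` is listed twice so the loader can confirm that every image Kubernetes v1.23.6 needs is already present; only set membership matters, which is why the two stdout blocks above can differ in order. A sketch of that decision (assumed helper name, not minikube's actual function):

    package main

    import "fmt"

    // imagesPreloaded reports whether every wanted image appears in the
    // docker images listing; extra images (e.g. busybox) are ignored.
    func imagesPreloaded(listed, wanted []string) bool {
        have := make(map[string]bool, len(listed))
        for _, img := range listed {
            have[img] = true
        }
        for _, img := range wanted {
            if !have[img] {
                return false
            }
        }
        return true
    }

    func main() {
        listed := []string{
            "k8s.gcr.io/kube-apiserver:v1.23.6",
            "k8s.gcr.io/kube-proxy:v1.23.6",
            "k8s.gcr.io/etcd:3.5.1-0",
        }
        wanted := []string{"k8s.gcr.io/kube-apiserver:v1.23.6", "k8s.gcr.io/etcd:3.5.1-0"}
        fmt.Println(imagesPreloaded(listed, wanted)) // true -> skip tarball extraction
    }
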
	I0531 11:20:57.092456   14088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:20:57.165549   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:20:57.165561   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:20:57.165578   14088 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:20:57.165607   14088 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220531111947-2169 NodeName:default-k8s-different-port-20220531111947-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:20:57.165740   14088 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220531111947-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
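The generated kubeadm.yaml above bundles four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A minimal sketch that walks those documents with gopkg.in/yaml.v3; this is illustrative only, since minikube templates the file rather than parsing it:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // yields one document per Decode call
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
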
	I0531 11:20:57.165815   14088 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220531111947-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0531 11:20:57.165869   14088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:20:57.174125   14088 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:20:57.174180   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:20:57.181428   14088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0531 11:20:57.193769   14088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:20:57.206426   14088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0531 11:20:57.218608   14088 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:20:57.222399   14088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:20:57.231856   14088 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169 for IP: 192.168.58.2
	I0531 11:20:57.231960   14088 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:20:57.232024   14088 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:20:57.232114   14088 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.key
	I0531 11:20:57.232170   14088 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.key.cee25041
	I0531 11:20:57.232221   14088 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.key
	I0531 11:20:57.232425   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:20:57.232955   14088 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:20:57.232980   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:20:57.233064   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:20:57.233187   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:20:57.233279   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:20:57.233411   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:20:57.234195   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:20:57.251232   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 11:20:57.268133   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:20:57.284962   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 11:20:57.302399   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:20:57.319341   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:20:57.336647   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:20:57.353997   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:20:57.370819   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:20:57.388311   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:20:57.405153   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:20:57.422579   14088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:20:57.435885   14088 ssh_runner.go:195] Run: openssl version
	I0531 11:20:57.441459   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:20:57.449337   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.453299   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.453340   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.458858   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:20:57.467841   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:20:57.476418   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.480355   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.480411   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.485744   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:20:57.493027   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:20:57.500863   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.504963   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.505012   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.510223   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
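
The `openssl x509 -hash -noout` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is located via a symlink named `<subject-hash>.0`, hence `b5213941.0` for minikubeCA. A Go sketch of the same installation step (illustrative; `installCA` is a made-up name, and the collision suffix beyond `.0` is ignored):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pem string) error {
        // openssl prints the subject hash used for directory lookups,
        // e.g. "b5213941" for the minikubeCA certificate above.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // mimic ln -fs: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
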
	I0531 11:20:57.517409   14088 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:20:57.517502   14088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:20:57.546434   14088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:20:57.554470   14088 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:20:57.554485   14088 kubeadm.go:626] restartCluster start
	I0531 11:20:57.554529   14088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:20:57.561236   14088 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:57.561291   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:57.632105   14088 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220531111947-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:20:57.632282   14088 kubeconfig.go:127] "default-k8s-different-port-20220531111947-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:20:57.632611   14088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:20:57.633766   14088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:20:57.641261   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:57.641314   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:57.649587   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:57.851170   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:57.851374   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:57.861667   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.049817   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.049887   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.059064   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.250141   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.250267   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.260725   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.449710   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.449793   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.459510   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.651089   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.651214   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.661996   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.851729   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.851819   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.861332   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.051735   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.051889   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.063335   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.250489   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.250612   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.261366   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.451746   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.451884   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.461795   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.651683   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.651840   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.662763   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.851738   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.851863   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.862352   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.051822   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.051919   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.060856   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.251144   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.251295   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.262356   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.450234   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.450389   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.460742   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.650557   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.650686   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.661258   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.661270   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.661321   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.670223   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.670238   14088 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
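
Each "Checking apiserver status" burst above is one probe of a poll loop: `pgrep -xnf` exits non-zero while no kube-apiserver process matches, and the caller retries on a short interval (roughly 200ms in this log) until a deadline, then declares the cluster in need of reconfiguration. A stripped-down version of such a loop; a sketch, not the minikube source:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until pattern matches or ctx expires.
    func waitForProcess(ctx context.Context, pattern string, every time.Duration) error {
        tick := time.NewTicker(every)
        defer tick.Stop()
        for {
            // pgrep exits 0 iff at least one process matches the pattern.
            if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 200*time.Millisecond); err != nil {
            fmt.Println("apiserver never appeared:", err) // -> needs reconfigure
        }
    }
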
	I0531 11:21:00.670248   14088 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:21:00.670310   14088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:21:00.699869   14088 docker.go:442] Stopping containers: [b48c62911956 39ecd49e2959 0d1c428e0118 1d43fd380df3 5f61410a1644 fc5d85a557ec 018e14d1f471 e572fe01902d bab412bceb10 e5581b46b9e9 93aa5f139910 96cf36883161 2671bf2afe6f a57dbeccaab4 ed5be2dbd485 b2ae6df97b5f]
	I0531 11:21:00.699944   14088 ssh_runner.go:195] Run: docker stop b48c62911956 39ecd49e2959 0d1c428e0118 1d43fd380df3 5f61410a1644 fc5d85a557ec 018e14d1f471 e572fe01902d bab412bceb10 e5581b46b9e9 93aa5f139910 96cf36883161 2671bf2afe6f a57dbeccaab4 ed5be2dbd485 b2ae6df97b5f
	I0531 11:21:00.731543   14088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:21:00.744008   14088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:21:00.751326   14088 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 18:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 May 31 18:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 18:20 /etc/kubernetes/scheduler.conf
	
	I0531 11:21:00.751370   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0531 11:21:00.758546   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0531 11:21:00.765681   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0531 11:21:00.772792   14088 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.772846   14088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:21:00.779694   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0531 11:21:00.786641   14088 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.786689   14088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 11:21:00.793632   14088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:21:00.800995   14088 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
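
Before reconfiguring, each kubeconfig under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8444; files that miss it (controller-manager.conf and scheduler.conf above, presumably written for a different port) are removed so `kubeadm init phase kubeconfig` can regenerate them. A sketch of that filter with a hypothetical helper name:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // staleKubeconfigs returns the config files that do not reference the
    // expected control-plane endpoint and so must be removed and regenerated.
    func staleKubeconfigs(endpoint string, files []string) []string {
        var stale []string
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                stale = append(stale, f)
            }
        }
        return stale
    }

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range staleKubeconfigs("https://control-plane.minikube.internal:8444", files) {
            fmt.Println("would remove:", f) // kubeadm init phase kubeconfig recreates it
        }
    }
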
	I0531 11:21:00.801006   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:00.845457   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.402938   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.519525   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.564444   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.614093   14088 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:21:01.614155   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.126101   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.624157   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.670383   14088 api_server.go:71] duration metric: took 1.056305017s to wait for apiserver process to appear ...
	I0531 11:21:02.670406   14088 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:21:02.670419   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:02.671578   14088 api_server.go:256] stopped: https://127.0.0.1:53880/healthz: Get "https://127.0.0.1:53880/healthz": EOF
	I0531 11:21:03.172114   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:05.213534   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:21:05.213551   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:21:05.671899   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:05.679000   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:21:05.679016   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:21:06.171614   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:06.177304   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:21:06.177318   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:21:06.671709   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:06.677584   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 200:
	ok
	I0531 11:21:06.684407   14088 api_server.go:140] control plane version: v1.23.6
	I0531 11:21:06.684421   14088 api_server.go:130] duration metric: took 4.014058772s to wait for apiserver health ...
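
The healthz progression above is typical of an apiserver coming up: first a 403 (the anonymous probe is rejected until the RBAC bootstrap roles exist), then 500s while individual poststarthooks (`rbac/bootstrap-roles`, `scheduling/bootstrap-system-priority-classes`) finish, and finally 200 with body "ok". A bare-bones probe of the same endpoint; a sketch in which certificate verification is skipped for brevity, where a real client would trust the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Skip verification of the apiserver's self-signed cert for
                // this illustration only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://127.0.0.1:53880/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // connection refused/EOF while starting
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // 403 -> 500 -> 200 "ok"
    }
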
	I0531 11:21:06.684426   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:21:06.684430   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:21:06.684440   14088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:21:06.692117   14088 system_pods.go:59] 8 kube-system pods found
	I0531 11:21:06.692131   14088 system_pods.go:61] "coredns-64897985d-hw9jj" [a99971df-076d-4aba-a217-a2a75c87a745] Running
	I0531 11:21:06.692141   14088 system_pods.go:61] "etcd-default-k8s-different-port-20220531111947-2169" [297b8c39-20c3-4101-878e-1fab3854f875] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 11:21:06.692146   14088 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531111947-2169" [d3af2377-33bb-4d77-873c-bf4d620b1ccc] Running
	I0531 11:21:06.692152   14088 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531111947-2169" [3f64c0cd-80e0-4f01-b61c-62d6914342cc] Running
	I0531 11:21:06.692156   14088 system_pods.go:61] "kube-proxy-4ljp8" [b5ef4698-6857-48cc-828a-26043bc6f05f] Running
	I0531 11:21:06.692159   14088 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531111947-2169" [8efaf333-e7f0-4eb4-ace3-68210d3b9d66] Running
	I0531 11:21:06.692166   14088 system_pods.go:61] "metrics-server-b955d9d8-dj4pb" [837a7b7e-0528-4b97-af67-3dab5106f2a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:21:06.692172   14088 system_pods.go:61] "storage-provisioner" [45148f19-69b5-4e40-a3e5-284bafef13b2] Running
	I0531 11:21:06.692175   14088 system_pods.go:74] duration metric: took 7.732726ms to wait for pod list to return data ...
	I0531 11:21:06.692181   14088 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:21:06.695801   14088 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:21:06.695817   14088 node_conditions.go:123] node cpu capacity is 6
	I0531 11:21:06.695829   14088 node_conditions.go:105] duration metric: took 3.644789ms to run NodePressure ...
	I0531 11:21:06.695842   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:06.831589   14088 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 11:21:06.835556   14088 kubeadm.go:777] kubelet initialised
	I0531 11:21:06.835567   14088 kubeadm.go:778] duration metric: took 3.96448ms waiting for restarted kubelet to initialise ...
	I0531 11:21:06.835577   14088 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
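
Each `pod_ready.go` line below reduces to reading the PodReady condition off the pod's status. With the client-go API types, the check looks like this (a minimal sketch mirroring, not reproducing, minikube's pod_ready.go):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady: a pod counts as "Ready" when its PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse}, // e.g. metrics-server below
        }}}
        fmt.Println(isPodReady(pod)) // false -> keep polling, here up to 4m0s
    }
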
	I0531 11:21:06.841061   14088 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-hw9jj" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:06.857619   14088 pod_ready.go:92] pod "coredns-64897985d-hw9jj" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:06.857637   14088 pod_ready.go:81] duration metric: took 16.555578ms waiting for pod "coredns-64897985d-hw9jj" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:06.857647   14088 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:08.871386   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:11.369552   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:13.868531   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:14.869966   14088 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:14.869979   14088 pod_ready.go:81] duration metric: took 8.012423735s waiting for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:14.869987   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:16.883059   14088 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:18.883566   14088 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.883577   14088 pod_ready.go:81] duration metric: took 4.013634265s waiting for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.883584   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.888077   14088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.888084   14088 pod_ready.go:81] duration metric: took 4.485217ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.888090   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4ljp8" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.892647   14088 pod_ready.go:92] pod "kube-proxy-4ljp8" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.892655   14088 pod_ready.go:81] duration metric: took 4.561071ms waiting for pod "kube-proxy-4ljp8" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.892661   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.896699   14088 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.896707   14088 pod_ready.go:81] duration metric: took 4.041445ms waiting for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.896713   14088 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:20.908715   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:23.409540   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:25.909327   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:28.408295   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:30.409483   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:32.411745   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:34.911779   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:37.408629   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:39.412518   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:41.908120   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:43.908385   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:45.910144   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:48.411558   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:50.907773   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:52.911197   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:55.409130   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:57.909519   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:00.411176   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:02.907272   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:04.911276   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:07.408653   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:09.909908   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:12.408769   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:14.410410   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:16.910112   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:19.408777   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:21.410381   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:23.410768   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:25.908736   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:27.910873   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:30.410574   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:32.410928   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:34.907313   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:36.907978   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:39.410775   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:41.908034   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:43.909646   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:46.409183   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:48.909862   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:51.410737   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:53.908132   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:55.909762   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:57.909964   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:00.408361   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:02.907705   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:04.908465   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:07.407592   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:09.410553   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:11.907998   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:13.910191   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:16.407762   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:18.408926   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:20.907622   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:23.410482   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:25.907564   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:27.910471   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:30.409056   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:32.908170   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:34.908332   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:37.407176   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:39.409174   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:41.907391   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:44.408139   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:46.906854   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:48.908411   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:51.408059   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:53.907472   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:56.407152   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:58.908572   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:01.407858   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:03.409607   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:05.908557   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:08.408406   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:10.410209   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:12.909722   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:15.407046   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:17.908838   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:20.406716   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:22.407482   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:24.408743   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:26.907123   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:28.908785   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:31.406500   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:33.407133   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:35.407499   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:37.409435   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:39.907341   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:42.407598   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:44.408678   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:46.408812   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:48.906968   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:50.907917   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:52.909090   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:55.406872   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:57.407488   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:59.906302   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:01.907576   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:04.408849   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:06.907815   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:09.413253   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:11.908170   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:14.407071   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:16.906996   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:18.900324   14088 pod_ready.go:81] duration metric: took 4m0.006483722s waiting for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" ...
	E0531 11:25:18.900352   14088 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 11:25:18.900380   14088 pod_ready.go:38] duration metric: took 4m12.067852815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:25:18.900441   14088 kubeadm.go:630] restartCluster took 4m21.349122361s
	W0531 11:25:18.900579   14088 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 11:25:18.900609   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:25:57.271214   14088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.371057145s)
	I0531 11:25:57.271273   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:25:57.280781   14088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:25:57.288357   14088 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:25:57.288400   14088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:25:57.295934   14088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:25:57.295968   14088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:25:57.778963   14088 out.go:204]   - Generating certificates and keys ...
	I0531 11:25:58.867689   14088 out.go:204]   - Booting up control plane ...
	I0531 11:26:05.418464   14088 out.go:204]   - Configuring RBAC rules ...
	I0531 11:26:05.792173   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:26:05.792184   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:26:05.792207   14088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:26:05.792275   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531111947-2169 minikube.k8s.io/updated_at=2022_05_31T11_26_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:05.792280   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:05.805712   14088 ops.go:34] apiserver oom_adj: -16
	I0531 11:26:05.878613   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:06.556979   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:07.056117   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:07.557355   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:08:26 UTC, end at Tue 2022-05-31 18:26:10 UTC. --
	May 31 18:08:26 old-k8s-version-20220531110241-2169 systemd[1]: Starting Docker Application Container Engine...
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.442700177Z" level=info msg="Starting up"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445540309Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445580709Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445602670Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445613401Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447324824Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447356391Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447369067Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447375179Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.454861167Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.459158936Z" level=info msg="Loading containers: start."
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.541211721Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.574193816Z" level=info msg="Loading containers: done."
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.582853381Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.582916167Z" level=info msg="Daemon has completed initialization"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 systemd[1]: Started Docker Application Container Engine.
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.603971346Z" level=info msg="API listen on [::]:2376"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.609838771Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-05-31T18:26:12Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:26:12 up  1:14,  0 users,  load average: 0.63, 0.75, 0.99
	Linux old-k8s-version-20220531110241-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:08:26 UTC, end at Tue 2022-05-31 18:26:12 UTC. --
	May 31 18:26:11 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 31 18:26:11 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	May 31 18:26:11 old-k8s-version-20220531110241-2169 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 31 18:26:11 old-k8s-version-20220531110241-2169 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 31 18:26:11 old-k8s-version-20220531110241-2169 kubelet[24387]: I0531 18:26:11.771213   24387 server.go:410] Version: v1.16.0
	May 31 18:26:11 old-k8s-version-20220531110241-2169 kubelet[24387]: I0531 18:26:11.771501   24387 plugins.go:100] No cloud provider specified.
	May 31 18:26:11 old-k8s-version-20220531110241-2169 kubelet[24387]: I0531 18:26:11.771518   24387 server.go:773] Client rotation is on, will bootstrap in background
	May 31 18:26:11 old-k8s-version-20220531110241-2169 kubelet[24387]: I0531 18:26:11.773303   24387 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 31 18:26:11 old-k8s-version-20220531110241-2169 kubelet[24387]: W0531 18:26:11.773935   24387 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	May 31 18:26:11 old-k8s-version-20220531110241-2169 kubelet[24387]: W0531 18:26:11.773998   24387 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	May 31 18:26:11 old-k8s-version-20220531110241-2169 kubelet[24387]: F0531 18:26:11.774023   24387 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	May 31 18:26:11 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 31 18:26:11 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 31 18:26:12 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 930.
	May 31 18:26:12 old-k8s-version-20220531110241-2169 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 31 18:26:12 old-k8s-version-20220531110241-2169 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 31 18:26:12 old-k8s-version-20220531110241-2169 kubelet[24412]: I0531 18:26:12.527358   24412 server.go:410] Version: v1.16.0
	May 31 18:26:12 old-k8s-version-20220531110241-2169 kubelet[24412]: I0531 18:26:12.527632   24412 plugins.go:100] No cloud provider specified.
	May 31 18:26:12 old-k8s-version-20220531110241-2169 kubelet[24412]: I0531 18:26:12.527664   24412 server.go:773] Client rotation is on, will bootstrap in background
	May 31 18:26:12 old-k8s-version-20220531110241-2169 kubelet[24412]: I0531 18:26:12.529451   24412 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 31 18:26:12 old-k8s-version-20220531110241-2169 kubelet[24412]: W0531 18:26:12.530894   24412 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	May 31 18:26:12 old-k8s-version-20220531110241-2169 kubelet[24412]: W0531 18:26:12.531001   24412 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	May 31 18:26:12 old-k8s-version-20220531110241-2169 kubelet[24412]: F0531 18:26:12.531057   24412 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	May 31 18:26:12 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 31 18:26:12 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0531 11:26:12.554287   14206 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 2 (441.209036ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220531110241-2169" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.98s)
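
The repeated kubelet fatal in the log above, "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 929, 930, ...), is consistent with kubelet v1.16.0 running against a unified cgroup v2 hierarchy: that release predates cgroup v2 support and expects per-controller v1 mountpoints such as /sys/fs/cgroup/cpu, so the kubelet crash-loops and the apiserver never comes back. A minimal sketch for checking the node's cgroup layout, assuming Docker 20.10+ on the host and GNU stat inside the kicbase image (profile name taken from the logs above):

	# Host side: cgroup driver and version as reported by Docker.
	docker info --format '{{.CgroupDriver}} cgroup v{{.CgroupVersion}}'

	# Node side: "cgroup2fs" indicates a unified (v2) hierarchy; on v1 this
	# prints "tmpfs" and per-controller directories such as cpu/ are present.
	minikube ssh -p old-k8s-version-20220531110241-2169 "stat -fc %T /sys/fs/cgroup"
	minikube ssh -p old-k8s-version-20220531110241-2169 "ls /sys/fs/cgroup"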

TestStartStop/group/embed-certs/serial/Pause (43.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220531111208-2169 --alsologtostderr -v=1
E0531 11:19:00.030863    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169: exit status 2 (16.099846542s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169: exit status 2 (16.103404451s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220531111208-2169 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531111208-2169
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531111208-2169:

-- stdout --
	[
	    {
	        "Id": "002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60",
	        "Created": "2022-05-31T18:12:15.374174807Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233057,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:13:12.216616031Z",
	            "FinishedAt": "2022-05-31T18:13:10.24521198Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60/hostname",
	        "HostsPath": "/var/lib/docker/containers/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60/hosts",
	        "LogPath": "/var/lib/docker/containers/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60-json.log",
	        "Name": "/embed-certs-20220531111208-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531111208-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531111208-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b5a41605696b22b2cd91ad9d8c2332a08929394d3a8a272f0f44276eaa789464-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/docker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef35093e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5a41605696b22b2cd91ad9d8c2332a08929394d3a8a272f0f44276eaa789464/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5a41605696b22b2cd91ad9d8c2332a08929394d3a8a272f0f44276eaa789464/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5a41605696b22b2cd91ad9d8c2332a08929394d3a8a272f0f44276eaa789464/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531111208-2169",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531111208-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531111208-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531111208-2169",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531111208-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d414d9fd0d749495cd4dcb4533150b9eff2e751eaf2b1121783a01bc2ac067c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52734"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52735"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52736"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52737"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52733"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2d414d9fd0d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531111208-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "002d31e6b083",
	                        "embed-certs-20220531111208-2169"
	                    ],
	                    "NetworkID": "c80f14b31c6469883124681d83b6953096f1892ca6f339d77c90232b70b0ad33",
	                    "EndpointID": "934ee3802ba392939a9dc393a27a439b54fcc4961d5924a2e21b0b30cf534b37",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220531111208-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220531111208-2169 logs -n 25: (2.822367448s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |               Profile               |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	|         | pgrep -a kubelet                                  |                                     |         |                |                     |                     |
	| delete  | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	| start   | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --memory=2200                                     |                                     |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                     |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                     |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                     |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220531110241-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                     |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220531110241-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                     |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --memory=2200                                     |                                     |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                     |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                     |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                     |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                     |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                     |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | logs -n 25                                        |                                     |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | logs -n 25                                        |                                     |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                     |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                     |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169               | old-k8s-version-20220531110241-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:16 PDT | 31 May 22 11:16 PDT |
	|         | logs -n 25                                        |                                     |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                     |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                     |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                     |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                     |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:13:10
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:13:10.912075   13553 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:13:10.912340   13553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:13:10.912345   13553 out.go:309] Setting ErrFile to fd 2...
	I0531 11:13:10.912349   13553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:13:10.912452   13553 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:13:10.912710   13553 out.go:303] Setting JSON to false
	I0531 11:13:10.927550   13553 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4359,"bootTime":1654016431,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:13:10.927657   13553 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:13:10.950011   13553 out.go:177] * [embed-certs-20220531111208-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:13:10.992542   13553 notify.go:193] Checking for updates...
	I0531 11:13:11.014435   13553 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:13:11.057209   13553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:13:11.078751   13553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:13:11.100576   13553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:13:11.122489   13553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:13:11.145156   13553 config.go:178] Loaded profile config "embed-certs-20220531111208-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:13:11.145842   13553 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:13:11.217087   13553 docker.go:137] docker version: linux-20.10.14
	I0531 11:13:11.217221   13553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:13:11.343566   13553 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:13:11.291646587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:13:11.387031   13553 out.go:177] * Using the docker driver based on existing profile
	I0531 11:13:11.408143   13553 start.go:284] selected driver: docker
	I0531 11:13:11.408166   13553 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:11.408292   13553 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:13:11.410542   13553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:13:11.535319   13553 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:13:11.48504376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:13:11.535472   13553 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:13:11.535492   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:11.535500   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:11.535513   13553 start_flags.go:306] config:
	{Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:11.579012   13553 out.go:177] * Starting control plane node embed-certs-20220531111208-2169 in cluster embed-certs-20220531111208-2169
	I0531 11:13:11.600345   13553 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:13:11.622204   13553 out.go:177] * Pulling base image ...
	I0531 11:13:11.664279   13553 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:13:11.664367   13553 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:13:11.664355   13553 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:13:11.664398   13553 cache.go:57] Caching tarball of preloaded images
	I0531 11:13:11.664595   13553 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:13:11.664618   13553 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 11:13:11.665489   13553 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/config.json ...
	I0531 11:13:11.728631   13553 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:13:11.728650   13553 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:13:11.728661   13553 cache.go:206] Successfully downloaded all kic artifacts
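
The preload logic above boils down to a stat of the cached tarball (the same check whose failure is reported for v1.16.0 at the top of this report). A minimal sketch, assuming MINIKUBE_HOME points at the .minikube directory and that the "v18" preload naming visible in the paths above stays fixed:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath mirrors the cache layout visible in the log:
    // $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-<ver>-docker-overlay2-amd64.tar.lz4
    func preloadPath(minikubeHome, k8sVersion string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.23.6")
        if _, err := os.Stat(p); err != nil {
            fmt.Println("preload missing, a download would be triggered:", err)
            return
        }
        fmt.Println("found local preload:", p)
    }
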
	I0531 11:13:11.728716   13553 start.go:352] acquiring machines lock for embed-certs-20220531111208-2169: {Name:mk6b884d6089a1578cdaf488d7f8fffed1b73a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:13:11.728792   13553 start.go:356] acquired machines lock for "embed-certs-20220531111208-2169" in 57.599µs
	I0531 11:13:11.728839   13553 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:13:11.728846   13553 fix.go:55] fixHost starting: 
	I0531 11:13:11.729063   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:13:11.794705   13553 fix.go:103] recreateIfNeeded on embed-certs-20220531111208-2169: state=Stopped err=<nil>
	W0531 11:13:11.794739   13553 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:13:11.838307   13553 out.go:177] * Restarting existing docker container for "embed-certs-20220531111208-2169" ...
	I0531 11:13:11.859598   13553 cli_runner.go:164] Run: docker start embed-certs-20220531111208-2169
	I0531 11:13:12.207124   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:13:12.278559   13553 kic.go:416] container "embed-certs-20220531111208-2169" state is running.
	I0531 11:13:12.279154   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
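
The repeated `docker container inspect -f` calls above pass Go templates that the Docker CLI renders against the container's state; the IP lookup, for instance, can be reproduced standalone. A small sketch (container name taken from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIP renders the same template minikube passes to
    // `docker container inspect` to read a container's IPv4
    // (and IPv6, if any) address.
    func containerIP(name string) (string, error) {
        tmpl := "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}"
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        ip, err := containerIP("embed-certs-20220531111208-2169")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip)
    }
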
	I0531 11:13:12.351999   13553 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/config.json ...
	I0531 11:13:12.352414   13553 machine.go:88] provisioning docker machine ...
	I0531 11:13:12.352438   13553 ubuntu.go:169] provisioning hostname "embed-certs-20220531111208-2169"
	I0531 11:13:12.352499   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:12.426073   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:12.426254   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:12.426271   13553 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531111208-2169 && echo "embed-certs-20220531111208-2169" | sudo tee /etc/hostname
	I0531 11:13:12.546985   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531111208-2169
	
	I0531 11:13:12.547055   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:12.667019   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:12.667153   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:12.667167   13553 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531111208-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531111208-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531111208-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:13:12.778841   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: 
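
libmachine runs the hostname and /etc/hosts commands above over SSH to the forwarded port 127.0.0.1:52734, using the per-machine key shown in the sshutil.go lines below. A minimal sketch of that transport with golang.org/x/crypto/ssh (reading the key under MINIKUBE_HOME is an assumption of this sketch; host-key checking is disabled only because the endpoint is a local test container):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPath := filepath.Join(os.Getenv("MINIKUBE_HOME"), "machines",
            "embed-certs-20220531111208-2169", "id_rsa")
        key, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
        }
        // 52734 is the host port Docker mapped to the container's 22/tcp above.
        client, err := ssh.Dial("tcp", "127.0.0.1:52734", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("out=%q err=%v\n", out, err)
    }
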
	I0531 11:13:12.778871   13553 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:13:12.778892   13553 ubuntu.go:177] setting up certificates
	I0531 11:13:12.778902   13553 provision.go:83] configureAuth start
	I0531 11:13:12.778963   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
	I0531 11:13:12.851177   13553 provision.go:138] copyHostCerts
	I0531 11:13:12.851272   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:13:12.851284   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:13:12.851409   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:13:12.851635   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:13:12.851644   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:13:12.851702   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:13:12.851836   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:13:12.851845   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:13:12.851899   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:13:12.852005   13553 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531111208-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531111208-2169]
	I0531 11:13:13.012300   13553 provision.go:172] copyRemoteCerts
	I0531 11:13:13.012367   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:13:13.012411   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.083950   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:13.163687   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:13:13.181984   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 11:13:13.202769   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:13:13.220771   13553 provision.go:86] duration metric: configureAuth took 441.859262ms
	I0531 11:13:13.220785   13553 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:13:13.220931   13553 config.go:178] Loaded profile config "embed-certs-20220531111208-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:13:13.220996   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.290761   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.290928   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.290938   13553 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:13:13.403887   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:13:13.403899   13553 ubuntu.go:71] root file system type: overlay
	I0531 11:13:13.404028   13553 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:13:13.404100   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.473905   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.474051   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.474101   13553 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:13:13.592185   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:13:13.592261   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.662203   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.662343   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.662357   13553 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:13:13.777966   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: 
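
The one-liner above is an idempotent update: write the candidate unit to docker.service.new, and only when `diff` reports a difference, move it over docker.service and daemon-reload/restart. A local Go analogue of the compare-then-swap core (file names are illustrative; the real flow runs remotely with sudo):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // swapIfChanged replaces dst with src only when their contents differ,
    // returning true when a swap (and hence a daemon restart) is needed.
    func swapIfChanged(dst, src string) (bool, error) {
        oldData, err := os.ReadFile(dst)
        if err != nil && !os.IsNotExist(err) {
            return false, err
        }
        newData, err := os.ReadFile(src)
        if err != nil {
            return false, err
        }
        if bytes.Equal(oldData, newData) {
            // Unit already up to date; discard the candidate, no restart.
            return false, os.Remove(src)
        }
        return true, os.Rename(src, dst)
    }

    func main() {
        changed, err := swapIfChanged("docker.service", "docker.service.new")
        // When changed, the remote flow runs `systemctl daemon-reload`
        // and `systemctl restart docker`, as in the command above.
        fmt.Println(changed, err)
    }
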
	I0531 11:13:13.777983   13553 machine.go:91] provisioned docker machine in 1.425577512s
	I0531 11:13:13.777991   13553 start.go:306] post-start starting for "embed-certs-20220531111208-2169" (driver="docker")
	I0531 11:13:13.777998   13553 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:13:13.778067   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:13:13.778116   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.848237   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:13.932021   13553 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:13:13.935470   13553 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:13:13.935482   13553 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:13:13.935489   13553 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:13:13.935497   13553 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:13:13.935504   13553 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:13:13.935616   13553 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:13:13.935749   13553 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:13:13.935898   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:13:13.942941   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:13:13.960034   13553 start.go:309] post-start completed in 182.035145ms
	I0531 11:13:13.960102   13553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:13:13.960153   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.029714   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.110010   13553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:13:14.114571   13553 fix.go:57] fixHost completed within 2.385751879s
	I0531 11:13:14.114581   13553 start.go:81] releasing machines lock for "embed-certs-20220531111208-2169", held for 2.385811827s
	I0531 11:13:14.114647   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
	I0531 11:13:14.183914   13553 ssh_runner.go:195] Run: systemctl --version
	I0531 11:13:14.183932   13553 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:13:14.183988   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.183999   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.259237   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.261186   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.338523   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:13:14.475654   13553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:13:14.485255   13553 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:13:14.485320   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:13:14.495800   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:13:14.508692   13553 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:13:14.578970   13553 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:13:14.646485   13553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:13:14.656123   13553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:13:14.719480   13553 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:13:14.729422   13553 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:13:14.764747   13553 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:13:14.842937   13553 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:13:14.843097   13553 cli_runner.go:164] Run: docker exec -t embed-certs-20220531111208-2169 dig +short host.docker.internal
	I0531 11:13:14.980735   13553 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:13:14.980851   13553 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:13:14.985209   13553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
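
The hosts update above follows a filter-append-copy pattern: `grep -v` drops any stale `host.minikube.internal` line, the fresh `ip<TAB>name` pair is appended, and the temp file is copied back over /etc/hosts. A sketch of the same rewrite, writing to a scratch file instead of /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any line ending in "<TAB><name>" and appends a fresh
    // "ip<TAB>name" pair, mirroring the grep -v / echo pipeline in the log.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        updated := upsertHost(strings.TrimRight(string(data), "\n"),
            "192.168.65.2", "host.minikube.internal")
        // A real provisioner writes to a temp file and sudo-cp's it back.
        if err := os.WriteFile("hosts.updated", []byte(updated), 0o644); err != nil {
            panic(err)
        }
    }
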
	I0531 11:13:14.995189   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:15.066041   13553 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:13:15.066120   13553 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:13:15.099230   13553 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:13:15.099246   13553 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:13:15.099322   13553 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:13:15.128293   13553 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:13:15.128309   13553 cache_images.go:84] Images are preloaded, skipping loading
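
"Images are preloaded" above is decided by listing `docker images --format {{.Repository}}:{{.Tag}}` inside the node and checking that every expected image for this Kubernetes version is present. A sketch of that set check (expected list abridged from the stdout block above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        expected := []string{
            "k8s.gcr.io/kube-apiserver:v1.23.6",
            "k8s.gcr.io/kube-controller-manager:v1.23.6",
            "k8s.gcr.io/etcd:3.5.1-0",
            "k8s.gcr.io/coredns/coredns:v1.8.6",
            "k8s.gcr.io/pause:3.6",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range expected {
            if !have[img] {
                fmt.Println("missing, extraction needed:", img)
                return
            }
        }
        fmt.Println("images already preloaded, skipping extraction")
    }
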
	I0531 11:13:15.128404   13553 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:13:15.201388   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:15.201399   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:15.201412   13553 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:13:15.201426   13553 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531111208-2169 NodeName:embed-certs-20220531111208-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:13:15.201536   13553 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220531111208-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 11:13:15.201613   13553 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220531111208-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
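
The kubeadm config and kubelet flags above are rendered from minikube's option structs; mechanically this is ordinary text/template substitution. A toy sketch of that rendering step, with the template cut down to a fragment of the InitConfiguration (the field names on the options struct are illustrative, not minikube's internal type):

    package main

    import (
        "os"
        "text/template"
    )

    // A fragment of the InitConfiguration above, parameterized the way a
    // template-driven generator would fill it in.
    const fragment = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: {{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        opts := struct {
            AdvertiseAddress string
            APIServerPort    int
            CRISocket        string
            NodeName         string
        }{"192.168.58.2", 8443, "/var/run/dockershim.sock", "embed-certs-20220531111208-2169"}
        t := template.Must(template.New("kubeadm").Parse(fragment))
        if err := t.Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }
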
	I0531 11:13:15.201672   13553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:13:15.209154   13553 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:13:15.209203   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:13:15.216165   13553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0531 11:13:15.228487   13553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:13:15.241530   13553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0531 11:13:15.253811   13553 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:13:15.257550   13553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:13:15.266790   13553 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169 for IP: 192.168.58.2
	I0531 11:13:15.266894   13553 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:13:15.266943   13553 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:13:15.267029   13553 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/client.key
	I0531 11:13:15.267089   13553 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.key.cee25041
	I0531 11:13:15.267135   13553 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.key
	I0531 11:13:15.267327   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:13:15.267368   13553 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:13:15.267379   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:13:15.267410   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:13:15.267442   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:13:15.267475   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:13:15.267531   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:13:15.268077   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:13:15.286065   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 11:13:15.303481   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:13:15.320612   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 11:13:15.338097   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:13:15.354546   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:13:15.370990   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:13:15.387662   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:13:15.404111   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:13:15.420738   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:13:15.437866   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:13:15.454492   13553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:13:15.467247   13553 ssh_runner.go:195] Run: openssl version
	I0531 11:13:15.472671   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:13:15.480357   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.484359   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.484403   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.489653   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:13:15.496718   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:13:15.504292   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.508441   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.508479   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.513962   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:13:15.521223   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:13:15.529012   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.533202   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.533243   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.538555   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
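
The openssl/ln sequence above installs each CA into OpenSSL's hashed trust layout: compute the certificate's subject hash and symlink the PEM as <hash>.0 (e.g. b5213941.0). A sketch that reproduces the step by shelling out to openssl (target directory and file names are illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA symlinks certPath into dir under OpenSSL's <subject-hash>.0
    // name, the same layout the `openssl x509 -hash` + `ln -fs` pair produces.
    func installCA(dir, certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate ln -f: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(installCA("./certs", "minikubeCA.pem"))
    }
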
	I0531 11:13:15.545740   13553 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:15.545831   13553 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:13:15.574436   13553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:13:15.582575   13553 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:13:15.582590   13553 kubeadm.go:626] restartCluster start
	I0531 11:13:15.582637   13553 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:13:15.589452   13553 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:15.589508   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:15.658995   13553 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531111208-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:13:15.659167   13553 kubeconfig.go:127] "embed-certs-20220531111208-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:13:15.659511   13553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:13:15.660882   13553 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:13:15.668383   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:15.668428   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:15.676694   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... the same "Checking apiserver status" pgrep check failed 16 more times, retried roughly every 200ms from 11:13:15.878 through 11:13:18.696, with identical empty stdout/stderr each time ...]
	I0531 11:13:18.696907   13553 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
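The collapsed loop above is a fixed-interval poll: run the pgrep, sleep about 200ms, and give up at a deadline with "timed out waiting for the condition". A stdlib-only sketch of that pattern, with illustrative interval and timeout values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollAPIServerPID retries the same pgrep on a fixed interval until it
// succeeds or the timeout elapses, mirroring the retry loop in the log.
func pollAPIServerPID(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // apiserver process found
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if err := pollAPIServerPID(200*time.Millisecond, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}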
	I0531 11:13:18.696918   13553 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:13:18.696972   13553 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:13:18.727258   13553 docker.go:442] Stopping containers: [a90b3415795b f36f1b8ec616 151bcff24641 b44621a18266 2d9e1bd569b5 a9acd433a353 3df64dbfd2e2 7fc0f47f65d2 8ce1e9e63077 862692e6d3d2 19686116a07e 2784b5f463be d5a4a6345359 dcebe9e24d2f e6dac4e073bd b474066ffe56]
	I0531 11:13:18.727328   13553 ssh_runner.go:195] Run: docker stop a90b3415795b f36f1b8ec616 151bcff24641 b44621a18266 2d9e1bd569b5 a9acd433a353 3df64dbfd2e2 7fc0f47f65d2 8ce1e9e63077 862692e6d3d2 19686116a07e 2784b5f463be d5a4a6345359 dcebe9e24d2f e6dac4e073bd b474066ffe56
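Stopping the kube-system containers is two docker invocations: list IDs matching a name filter, then pass every ID to a single docker stop. A sketch with os/exec, reusing the filter string from the log (error handling kept minimal):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List container IDs whose names match k8s_<anything>_(kube-system)_...
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	// docker stop accepts multiple IDs in one invocation, as in the log.
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Println("docker stop:", err)
	}
}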
	I0531 11:13:18.758625   13553 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:13:18.769960   13553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:13:18.778599   13553 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 18:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 18:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 18:12 /etc/kubernetes/scheduler.conf
	
	I0531 11:13:18.778676   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:13:18.786996   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:13:18.795010   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:13:18.802414   13553 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.802469   13553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:13:18.810402   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:13:18.818706   13553 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.818775   13553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 11:13:18.825849   13553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:13:18.833007   13553 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
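Each surviving config file is grepped for the control-plane endpoint, and files missing it are removed so the subsequent kubeadm init phase kubeconfig regenerates them. A stdlib sketch of that check-or-remove pattern, with the marker and paths as shown in the log (not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const marker = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !strings.Contains(string(data), marker) {
			fmt.Printf("%q not found in %s - removing\n", marker, f)
			os.Remove(f) // kubeadm init phase kubeconfig rewrites it
		}
	}
}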
	I0531 11:13:18.833017   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:18.877378   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:19.935016   13553 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057629084s)
	I0531 11:13:19.935035   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.058140   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.103466   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.152115   13553 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:13:20.152176   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:20.663756   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:21.164475   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:21.215674   13553 api_server.go:71] duration metric: took 1.063576049s to wait for apiserver process to appear ...
	I0531 11:13:21.215692   13553 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:13:21.215704   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:21.216920   13553 api_server.go:256] stopped: https://127.0.0.1:52733/healthz: Get "https://127.0.0.1:52733/healthz": EOF
	I0531 11:13:21.718992   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.167314   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:13:24.167334   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:13:24.217141   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.222557   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:24.222574   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:24.719071   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.726442   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:24.726459   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:25.216999   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:25.222726   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:25.222741   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:25.717101   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:25.724848   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 200:
	ok
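A verbose /healthz failure lists one check per line, [+] for passing and [-] for failing hooks, until the body collapses to the bare "ok" above. A small sketch that pulls the failing check names out of such a body (the sample input is abbreviated from the log):

package main

import (
	"fmt"
	"strings"
)

// failedChecks returns the names of health checks marked [-] in a
// verbose /healthz response body.
func failedChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "[-]") {
			name := strings.TrimPrefix(line, "[-]")
			if i := strings.Index(name, " failed"); i >= 0 {
				name = name[:i]
			}
			failed = append(failed, name)
		}
	}
	return failed
}

func main() {
	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
	fmt.Println(failedChecks(body)) // [poststarthook/rbac/bootstrap-roles]
}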
	I0531 11:13:25.732599   13553 api_server.go:140] control plane version: v1.23.6
	I0531 11:13:25.732611   13553 api_server.go:130] duration metric: took 4.516969769s to wait for apiserver health ...
	I0531 11:13:25.732616   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:25.732621   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:25.732632   13553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:13:25.741842   13553 system_pods.go:59] 8 kube-system pods found
	I0531 11:13:25.741859   13553 system_pods.go:61] "coredns-64897985d-45rxk" [1d1af550-c7eb-4d3d-a99e-ea74b583e84d] Running
	I0531 11:13:25.741863   13553 system_pods.go:61] "etcd-embed-certs-20220531111208-2169" [8b0ce277-ff5a-4e5b-b019-42c569689abb] Running
	I0531 11:13:25.741867   13553 system_pods.go:61] "kube-apiserver-embed-certs-20220531111208-2169" [b2087c02-761e-4919-8b92-9c3ae53f2821] Running
	I0531 11:13:25.741876   13553 system_pods.go:61] "kube-controller-manager-embed-certs-20220531111208-2169" [a56fc9fd-2eee-4f73-904d-0de881e33d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 11:13:25.741881   13553 system_pods.go:61] "kube-proxy-lgwn5" [9aad1763-1139-4bed-8c7d-a956e68d3386] Running
	I0531 11:13:25.741885   13553 system_pods.go:61] "kube-scheduler-embed-certs-20220531111208-2169" [9297a013-1420-42ab-8c26-7352aca786b3] Running
	I0531 11:13:25.741890   13553 system_pods.go:61] "metrics-server-b955d9d8-jbxp2" [ad7ca455-4720-4932-95d3-703a51595cb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:13:25.741895   13553 system_pods.go:61] "storage-provisioner" [d7df490e-a02b-4db2-912b-0d64caf0924b] Running
	I0531 11:13:25.741900   13553 system_pods.go:74] duration metric: took 9.263068ms to wait for pod list to return data ...
	I0531 11:13:25.741905   13553 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:13:25.745283   13553 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:13:25.745301   13553 node_conditions.go:123] node cpu capacity is 6
	I0531 11:13:25.745322   13553 node_conditions.go:105] duration metric: took 3.412768ms to run NodePressure ...
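The NodePressure step reads capacity figures like the 61255492Ki of ephemeral storage and 6 CPUs straight from the node's status. A hedged client-go sketch of that read; the kubeconfig path is illustrative and the node name is assumed to match the profile above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name assumed equal to the profile name, as minikube logs suggest.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-20220531111208-2169", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a resource-name -> quantity map on the node's status.
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
}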
	I0531 11:13:25.745359   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:26.023161   13553 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 11:13:26.027527   13553 kubeadm.go:777] kubelet initialised
	I0531 11:13:26.027540   13553 kubeadm.go:778] duration metric: took 4.364923ms waiting for restarted kubelet to initialise ...
	I0531 11:13:26.027549   13553 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:13:26.034285   13553 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-45rxk" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.086851   13553 pod_ready.go:92] pod "coredns-64897985d-45rxk" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.086875   13553 pod_ready.go:81] duration metric: took 52.574215ms waiting for pod "coredns-64897985d-45rxk" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.086892   13553 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.093072   13553 pod_ready.go:92] pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.093082   13553 pod_ready.go:81] duration metric: took 6.180628ms waiting for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.093089   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.099122   13553 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.099133   13553 pod_ready.go:81] duration metric: took 6.039477ms waiting for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.099139   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:28.146822   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:30.645890   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:33.144139   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:34.643120   13553 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:34.643133   13553 pod_ready.go:81] duration metric: took 8.544092302s waiting for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.643140   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lgwn5" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.647169   13553 pod_ready.go:92] pod "kube-proxy-lgwn5" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:34.647176   13553 pod_ready.go:81] duration metric: took 4.0327ms waiting for pod "kube-proxy-lgwn5" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.647182   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:36.657938   13553 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:37.157814   13553 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:37.157828   13553 pod_ready.go:81] duration metric: took 2.510670323s waiting for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
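Each of these waits resolves by inspecting the pod's Ready condition in its status, which is where the "Ready":"True"/"False" strings in the log come from. A minimal sketch of that condition check using k8s.io/api types (not minikube's actual helper):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's PodReady condition is True,
// which is what the pod_ready waits above are checking for.
func isPodReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &v1.Pod{Status: v1.PodStatus{Conditions: []v1.PodCondition{
		{Type: v1.PodReady, Status: v1.ConditionFalse},
	}}}
	fmt.Println(isPodReady(p)) // false, i.e. "Ready":"False" as in the log
}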
	I0531 11:13:37.157835   13553 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace to be "Ready" ...
	[... pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace polled 23 times with status "Ready":"False", 11:13:39 through 11:14:28 (pod_ready.go:102) ...]
	W0531 11:14:33.760041   13098 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
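The kubelet-check lines in the failure above are plain HTTP probes of the kubelet's local healthz endpoint on port 10248; "connection refused" means no kubelet is listening at all. An equivalent stdlib probe:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same endpoint kubeadm's kubelet-check curls; a refused connection
	// here means the kubelet is not running on this machine.
	resp, err := http.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body)
}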
	I0531 11:14:33.760073   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:14:34.182940   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:14:34.192616   13098 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:14:34.192666   13098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:14:34.200294   13098 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:14:34.200312   13098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:14:31.169348   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:33.668612   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:34.901603   13098 out.go:204]   - Generating certificates and keys ...
	I0531 11:14:36.104005   13098 out.go:204]   - Booting up control plane ...
	[... pod "metrics-server-b955d9d8-jbxp2" still "Ready":"False" across 51 further checks by process 13553, 11:14:36 through 11:16:29 (pod_ready.go:102) ...]
	I0531 11:16:31.020061   13098 kubeadm.go:397] StartCluster complete in 8m1.555975545s
	I0531 11:16:31.020140   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:16:31.050974   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.050987   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:16:31.051042   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:16:31.080367   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.080379   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:16:31.080436   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:16:31.109454   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.109467   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:16:31.109523   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:16:31.138029   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.138040   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:16:31.138093   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:16:31.168696   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.168708   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:16:31.168763   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:16:31.198083   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.198100   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:16:31.198162   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:16:31.226599   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.226611   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:16:31.226669   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:16:31.256444   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.256457   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:16:31.256464   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:16:31.256471   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:16:31.295837   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:16:31.295851   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:16:31.307624   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:16:31.307639   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:16:31.359917   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:16:31.359927   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:16:31.359936   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:16:31.372199   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:16:31.372211   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:16:33.427067   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054868747s)
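Commands that run long enough get a Completed line with a duration metric, like the ~2.05s crictl/docker fallback just above. A sketch of how such timing can wrap a shell invocation, with the command string copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	start := time.Now()
	// The which/|| fallback needs a shell, hence bash -c.
	err := exec.Command("/bin/bash", "-c", cmd).Run()
	fmt.Printf("Completed: %s: (%s), err=%v\n", cmd, time.Since(start), err)
}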
	W0531 11:16:33.427193   13098 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... identical to the kubeadm init failure output in the "initialization failed, will try again" block above, ending with "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster" ...]
	W0531 11:16:33.427208   13098 out.go:239] * 
	W0531 11:16:33.427350   13098 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... identical to the kubeadm init failure output above ...]
	
	W0531 11:16:33.427367   13098 out.go:239] * 
	W0531 11:16:33.427900   13098 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 11:16:33.489529   13098 out.go:177] 
	W0531 11:16:33.531716   13098 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 11:16:33.531846   13098 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 11:16:33.531898   13098 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
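
For reference, the suggestion above amounts to restarting the affected profile with the kubelet's cgroup driver pinned to systemd and then re-running the diagnostics that the kubeadm output names. A minimal sketch, assuming a hypothetical profile name (PROFILE is a placeholder, not taken from this log):

    # Hypothetical remediation sketch; substitute the failing profile name for PROFILE.
    minikube start -p PROFILE --extra-config=kubelet.cgroup-driver=systemd
    # Diagnostics quoted verbatim in the kubeadm error above:
    systemctl status kubelet
    journalctl -xeu kubelet
    docker ps -a | grep kube | grep -v pause
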
	I0531 11:16:33.573528   13098 out.go:177] 
	I0531 11:16:31.666134   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:33.666806   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:36.165436   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:38.165555   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:40.666748   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:43.165820   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:45.166474   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:47.668265   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:50.166730   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:52.666409   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:54.669704   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:57.165406   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:16:59.165439   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:01.167084   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:03.668488   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:06.165122   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:08.165587   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:10.167524   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:12.668897   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:15.163629   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:17.168082   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:19.666008   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:22.165572   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:24.666323   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:26.667326   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:29.164950   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:31.168573   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:33.668016   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:36.165700   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:17:37.159534   13553 pod_ready.go:81] duration metric: took 4m0.004598971s waiting for pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace to be "Ready" ...
	E0531 11:17:37.159579   13553 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 11:17:37.159598   13553 pod_ready.go:38] duration metric: took 4m11.13509036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:17:37.159636   13553 kubeadm.go:630] restartCluster took 4m21.580215027s
	W0531 11:17:37.159760   13553 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 11:17:37.159787   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:18:15.500549   13553 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.334520068s)
	I0531 11:18:15.500610   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:18:15.510190   13553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:18:15.517474   13553 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:18:15.517522   13553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:18:15.524667   13553 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:18:15.524695   13553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:18:15.987681   13553 out.go:204]   - Generating certificates and keys ...
	I0531 11:18:16.662161   13553 out.go:204]   - Booting up control plane ...
	I0531 11:18:23.769733   13553 out.go:204]   - Configuring RBAC rules ...
	I0531 11:18:24.147369   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:18:24.147382   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:18:24.147398   13553 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:18:24.147482   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:24.147486   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531111208-2169 minikube.k8s.io/updated_at=2022_05_31T11_18_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:24.306374   13553 ops.go:34] apiserver oom_adj: -16
	I0531 11:18:24.306508   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:24.914760   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:25.414598   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:25.914605   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:26.414420   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:26.914518   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:27.414379   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:27.914462   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:28.414351   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:28.914410   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:29.414668   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:29.914253   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:30.414340   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:30.914885   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:31.415853   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:31.914574   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:32.415078   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:32.914346   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:33.414395   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:33.914218   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:34.414344   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:34.914443   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:35.414462   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:35.914974   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:36.414944   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:36.914625   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:36.967118   13553 kubeadm.go:1045] duration metric: took 12.819623878s to wait for elevateKubeSystemPrivileges.
	I0531 11:18:36.967132   13553 kubeadm.go:397] StartCluster complete in 5m21.418074262s
	I0531 11:18:36.967151   13553 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:18:36.967232   13553 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:18:36.967996   13553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:18:37.482167   13553 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531111208-2169" rescaled to 1
	I0531 11:18:37.482224   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:18:37.482227   13553 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:18:37.482262   13553 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:18:37.524538   13553 out.go:177] * Verifying Kubernetes components...
	I0531 11:18:37.482417   13553 config.go:178] Loaded profile config "embed-certs-20220531111208-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:18:37.524615   13553 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531111208-2169"
	I0531 11:18:37.524616   13553 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531111208-2169"
	I0531 11:18:37.524617   13553 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531111208-2169"
	I0531 11:18:37.524619   13553 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531111208-2169"
	I0531 11:18:37.541569   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 11:18:37.561493   13553 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531111208-2169"
	W0531 11:18:37.561516   13553 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:18:37.561518   13553 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531111208-2169"
	I0531 11:18:37.561520   13553 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531111208-2169"
	W0531 11:18:37.561531   13553 addons.go:165] addon metrics-server should already be in state true
	W0531 11:18:37.561533   13553 addons.go:165] addon dashboard should already be in state true
	I0531 11:18:37.561541   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:18:37.561578   13553 host.go:66] Checking if "embed-certs-20220531111208-2169" exists ...
	I0531 11:18:37.561578   13553 host.go:66] Checking if "embed-certs-20220531111208-2169" exists ...
	I0531 11:18:37.561586   13553 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531111208-2169"
	I0531 11:18:37.561612   13553 host.go:66] Checking if "embed-certs-20220531111208-2169" exists ...
	I0531 11:18:37.562004   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.562026   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.562070   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.562072   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.602087   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:37.712519   13553 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 11:18:37.808694   13553 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:18:37.749890   13553 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:18:37.771648   13553 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:18:37.805914   13553 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531111208-2169" to be "Ready" ...
	I0531 11:18:37.845474   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 11:18:37.845570   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:37.847687   13553 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531111208-2169"
	I0531 11:18:37.903833   13553 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 11:18:37.866630   13553 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	W0531 11:18:37.903829   13553 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:18:37.903903   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:18:37.903944   13553 host.go:66] Checking if "embed-certs-20220531111208-2169" exists ...
	I0531 11:18:37.940564   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 11:18:37.940576   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:18:37.940601   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:37.940644   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:37.943758   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.946451   13553 node_ready.go:49] node "embed-certs-20220531111208-2169" has status "Ready":"True"
	I0531 11:18:37.946466   13553 node_ready.go:38] duration metric: took 101.000042ms waiting for node "embed-certs-20220531111208-2169" to be "Ready" ...
	I0531 11:18:37.946474   13553 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:18:37.962428   13553 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-2z9z7" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:37.964914   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:18:38.042082   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:18:38.042600   13553 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:18:38.042609   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:18:38.042656   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:38.044921   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:18:38.121207   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:18:38.176322   13553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:18:38.186200   13553 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:18:38.186213   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:18:38.188931   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:18:38.188944   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:18:38.269461   13553 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:18:38.269474   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:18:38.281186   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:18:38.281200   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:18:38.293306   13553 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:18:38.293322   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:18:38.309056   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:18:38.309079   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:18:38.316624   13553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:18:38.380266   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:18:38.380283   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:18:38.385836   13553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:18:38.401326   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:18:38.401343   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:18:38.573392   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:18:38.573410   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:18:38.575237   13553 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.013775272s)
	I0531 11:18:38.575260   13553 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
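
The one-liner that just completed pipes the live coredns ConfigMap through sed, splicing a hosts block in front of the forward directive before replacing the ConfigMap. Under that reading, the rewritten Corefile fragment would look roughly like:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The fallthrough directive lets lookups for anything other than host.minikube.internal continue on to the host's resolver.
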
	I0531 11:18:38.603368   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:18:38.603385   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:18:38.671143   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:18:38.671156   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:18:38.690338   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:18:38.690359   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:18:38.766383   13553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:18:38.905067   13553 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531111208-2169"
	I0531 11:18:39.672082   13553 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0531 11:18:39.693219   13553 addons.go:417] enableAddons completed in 2.210946115s
	I0531 11:18:39.980971   13553 pod_ready.go:102] pod "coredns-64897985d-2z9z7" in "kube-system" namespace has status "Ready":"False"
	I0531 11:18:40.983160   13553 pod_ready.go:92] pod "coredns-64897985d-2z9z7" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:40.983172   13553 pod_ready.go:81] duration metric: took 3.020730185s waiting for pod "coredns-64897985d-2z9z7" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.983178   13553 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-d97kt" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.988138   13553 pod_ready.go:92] pod "coredns-64897985d-d97kt" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:40.988150   13553 pod_ready.go:81] duration metric: took 4.942049ms waiting for pod "coredns-64897985d-d97kt" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.988157   13553 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.993977   13553 pod_ready.go:92] pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:40.993985   13553 pod_ready.go:81] duration metric: took 5.823114ms waiting for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.993993   13553 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.998424   13553 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:40.998432   13553 pod_ready.go:81] duration metric: took 4.434783ms waiting for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.998440   13553 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.002931   13553 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:41.002940   13553 pod_ready.go:81] duration metric: took 4.495408ms waiting for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.002947   13553 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8lnd" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.380632   13553 pod_ready.go:92] pod "kube-proxy-s8lnd" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:41.380642   13553 pod_ready.go:81] duration metric: took 377.686761ms waiting for pod "kube-proxy-s8lnd" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.380648   13553 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.782684   13553 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:41.782693   13553 pod_ready.go:81] duration metric: took 402.042479ms waiting for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.782699   13553 pod_ready.go:38] duration metric: took 3.836222865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:18:41.782714   13553 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:18:41.782762   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:18:41.792887   13553 api_server.go:71] duration metric: took 4.310653599s to wait for apiserver process to appear ...
	I0531 11:18:41.792904   13553 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:18:41.792911   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:18:41.798101   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 200:
	ok
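
The healthz probe above is a plain GET against the apiserver port forwarded to the host (52733 in this run). A manual equivalent, sketched with curl (-k is an assumption here, since the apiserver certificate is issued for the cluster's own names rather than by a public CA):

    curl -k https://127.0.0.1:52733/healthz
    # expected body on success: ok
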
	I0531 11:18:41.799178   13553 api_server.go:140] control plane version: v1.23.6
	I0531 11:18:41.799187   13553 api_server.go:130] duration metric: took 6.279368ms to wait for apiserver health ...
	I0531 11:18:41.799193   13553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:18:41.984608   13553 system_pods.go:59] 9 kube-system pods found
	I0531 11:18:41.984621   13553 system_pods.go:61] "coredns-64897985d-2z9z7" [5f4d99e6-3c5c-45b1-a942-86b44e5b650c] Running
	I0531 11:18:41.984625   13553 system_pods.go:61] "coredns-64897985d-d97kt" [be97178d-77d1-4249-833b-041d5a9d0d7c] Running
	I0531 11:18:41.984628   13553 system_pods.go:61] "etcd-embed-certs-20220531111208-2169" [f20662bd-6e19-4fd8-aaa7-4f2e75c0d76e] Running
	I0531 11:18:41.984632   13553 system_pods.go:61] "kube-apiserver-embed-certs-20220531111208-2169" [3b28e116-d7f2-4e27-9cc8-c7b1cced6c9a] Running
	I0531 11:18:41.984635   13553 system_pods.go:61] "kube-controller-manager-embed-certs-20220531111208-2169" [97bf3b0f-0ffc-4ded-90fb-fa83f9b26dbc] Running
	I0531 11:18:41.984640   13553 system_pods.go:61] "kube-proxy-s8lnd" [8dbc512a-0afd-4296-85f4-85c63277d4cb] Running
	I0531 11:18:41.984643   13553 system_pods.go:61] "kube-scheduler-embed-certs-20220531111208-2169" [78f46847-8517-4de1-bc0b-9d09823a3df7] Running
	I0531 11:18:41.984649   13553 system_pods.go:61] "metrics-server-b955d9d8-gt5gx" [84ce306e-102d-4757-8cc7-fb6002c68aeb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:18:41.984656   13553 system_pods.go:61] "storage-provisioner" [bd84ae59-1311-4da9-b670-3127bcdf000a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 11:18:41.984660   13553 system_pods.go:74] duration metric: took 185.464631ms to wait for pod list to return data ...
	I0531 11:18:41.984666   13553 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:18:42.179114   13553 default_sa.go:45] found service account: "default"
	I0531 11:18:42.179124   13553 default_sa.go:55] duration metric: took 194.455319ms for default service account to be created ...
	I0531 11:18:42.179129   13553 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 11:18:42.383815   13553 system_pods.go:86] 9 kube-system pods found
	I0531 11:18:42.383828   13553 system_pods.go:89] "coredns-64897985d-2z9z7" [5f4d99e6-3c5c-45b1-a942-86b44e5b650c] Running
	I0531 11:18:42.383833   13553 system_pods.go:89] "coredns-64897985d-d97kt" [be97178d-77d1-4249-833b-041d5a9d0d7c] Running
	I0531 11:18:42.383836   13553 system_pods.go:89] "etcd-embed-certs-20220531111208-2169" [f20662bd-6e19-4fd8-aaa7-4f2e75c0d76e] Running
	I0531 11:18:42.383843   13553 system_pods.go:89] "kube-apiserver-embed-certs-20220531111208-2169" [3b28e116-d7f2-4e27-9cc8-c7b1cced6c9a] Running
	I0531 11:18:42.383848   13553 system_pods.go:89] "kube-controller-manager-embed-certs-20220531111208-2169" [97bf3b0f-0ffc-4ded-90fb-fa83f9b26dbc] Running
	I0531 11:18:42.383851   13553 system_pods.go:89] "kube-proxy-s8lnd" [8dbc512a-0afd-4296-85f4-85c63277d4cb] Running
	I0531 11:18:42.383856   13553 system_pods.go:89] "kube-scheduler-embed-certs-20220531111208-2169" [78f46847-8517-4de1-bc0b-9d09823a3df7] Running
	I0531 11:18:42.383863   13553 system_pods.go:89] "metrics-server-b955d9d8-gt5gx" [84ce306e-102d-4757-8cc7-fb6002c68aeb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:18:42.383868   13553 system_pods.go:89] "storage-provisioner" [bd84ae59-1311-4da9-b670-3127bcdf000a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 11:18:42.383872   13553 system_pods.go:126] duration metric: took 204.740399ms to wait for k8s-apps to be running ...
	I0531 11:18:42.383880   13553 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 11:18:42.383928   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:18:42.393689   13553 system_svc.go:56] duration metric: took 9.806601ms WaitForService to wait for kubelet.
	I0531 11:18:42.393702   13553 kubeadm.go:572] duration metric: took 4.911473845s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 11:18:42.393718   13553 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:18:42.581075   13553 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:18:42.581087   13553 node_conditions.go:123] node cpu capacity is 6
	I0531 11:18:42.581093   13553 node_conditions.go:105] duration metric: took 187.37252ms to run NodePressure ...
	I0531 11:18:42.581100   13553 start.go:213] waiting for startup goroutines ...
	I0531 11:18:42.611055   13553 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:18:42.656851   13553 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220531111208-2169" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:13:12 UTC, end at Tue 2022-05-31 18:19:35 UTC. --
	May 31 18:17:53 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:17:53.530805270Z" level=info msg="ignoring event" container=50be302967528b853f7ac1e4dc91c8eeb7d42f60cc651309d6903f94524e9bba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:17:53 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:17:53.689625200Z" level=info msg="ignoring event" container=5ef516794e2d16ae778cdac5c0bae60e34442fa6ec3808460a5155e9df8c41b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:03 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:03.832388895Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=5d812627e01d463cba61766d77f8f3e5a4a0ee396a099804dd4be233ea71ddaa
	May 31 18:18:03 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:03.861354018Z" level=info msg="ignoring event" container=5d812627e01d463cba61766d77f8f3e5a4a0ee396a099804dd4be233ea71ddaa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:13 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:13.950850927Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=56a1f53933ebad06fa301f971826f9c200dcdac554dfd25f543023ab5cf4d11e
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.007074881Z" level=info msg="ignoring event" container=56a1f53933ebad06fa301f971826f9c200dcdac554dfd25f543023ab5cf4d11e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.114097583Z" level=info msg="ignoring event" container=ffeb28ba5d7a11b567047696b639f05c2e7762becb7529a11aea62a55bb55df8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.232622526Z" level=info msg="ignoring event" container=b783e58ea86be2f84f4eb8fbdca755f9890eff4e1ca9a0fb885674e85e76bcb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.333757515Z" level=info msg="ignoring event" container=36bdced7ed2c78fb1567475f7bf9588e620c75a2ba41888ac13ea6ee62f12f85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.441117124Z" level=info msg="ignoring event" container=27e9e8a9e266212f7bd72780c9c5636676703dd3f6e2cdf27a936161b1a34f5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.544006045Z" level=info msg="ignoring event" container=423a549384f617c910c0c101fcc090c1c9b2921c3381e4c6e7e7936620a132ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.667522145Z" level=info msg="ignoring event" container=00fab95bb6f9f8825cf3763f64e3981f8ee7c97069b62953e491aecda07399a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:39 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:39.943093574Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:39 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:39.943136712Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:39 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:39.944308479Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:41 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:41.145050314Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 18:18:43 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:43.821260323Z" level=info msg="ignoring event" container=f06710f13f19b36c643259b19b825e3d4957f0478828348e99abb6be3e65b2a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:43 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:43.907648959Z" level=info msg="ignoring event" container=1202f0672d480b5cc64e47f7f94f4de3d18525ef50293372d613e5294da01335 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:47 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:47.153557420Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:18:47 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:47.382833296Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:18:50 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:50.470749796Z" level=info msg="ignoring event" container=33b7eec0627703395ec768bbd69dc1cc3738e32615913d17a7a579dc8b6a6352 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:51 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:51.149508565Z" level=info msg="ignoring event" container=8a199e23fbe54b7ba0237510d7f878dfdc5d2825b2a7779d67538dbeb77ee04c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:54 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:54.224498873Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:54 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:54.224688384Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:54 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:54.225918696Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	8a199e23fbe54       a90209bb39e3d                                                                                    44 seconds ago       Exited              dashboard-metrics-scraper   1                   671bb48c6cf94
	1c0d799a50798       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   49 seconds ago       Running             kubernetes-dashboard        0                   412c4551f87c1
	41a4ffb4010c7       6e38f40d628db                                                                                    56 seconds ago       Running             storage-provisioner         0                   3eadf3a2630b1
	1394b7e618bc2       a4ca41631cc7a                                                                                    58 seconds ago       Running             coredns                     0                   62e8a324a5383
	9d0cb6e145a06       4c03754524064                                                                                    58 seconds ago       Running             kube-proxy                  0                   491abeb1d262b
	508a2ace44d0e       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   f6f6ab7d919fd
	084ed714bf4b3       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   31e072757a623
	1661862fa295f       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   e9628b3810c22
	21949b4d201ef       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   5345769750f74
	
	* 
	* ==> coredns [1394b7e618bc] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531111208-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531111208-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531111208-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T11_18_24_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:18:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531111208-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:19:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:19:33 +0000   Tue, 31 May 2022 18:18:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:19:33 +0000   Tue, 31 May 2022 18:18:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:19:33 +0000   Tue, 31 May 2022 18:18:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:19:33 +0000   Tue, 31 May 2022 18:19:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220531111208-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                bc1f1430-5bed-499b-aa06-97b0d93d15a0
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-d97kt                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     60s
	  kube-system                 etcd-embed-certs-20220531111208-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         75s
	  kube-system                 kube-apiserver-embed-certs-20220531111208-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-embed-certs-20220531111208-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-s8lnd                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-embed-certs-20220531111208-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 metrics-server-b955d9d8-gt5gx                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         58s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-skg77                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-fbr7m                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 58s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    79s (x5 over 79s)  kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x5 over 79s)  kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  79s (x5 over 79s)  kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientMemory
	  Normal  Starting                 72s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  72s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                62s                kubelet     Node embed-certs-20220531111208-2169 status is now: NodeReady
	  Normal  Starting                 3s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [1661862fa295] <==
	* {"level":"info","ts":"2022-05-31T18:18:18.381Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:18:18.381Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:18:18.381Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:18:18.381Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:18:18.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220531111208-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:18:18.726Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:18:18.727Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:18:37.860Z","caller":"traceutil/trace.go:171","msg":"trace[1759487273] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"146.918133ms","start":"2022-05-31T18:18:37.713Z","end":"2022-05-31T18:18:37.860Z","steps":["trace[1759487273] 'process raft request'  (duration: 146.800891ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T18:18:37.955Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.628085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:614"}
	{"level":"info","ts":"2022-05-31T18:18:37.955Z","caller":"traceutil/trace.go:171","msg":"trace[503652069] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:446; }","duration":"176.834549ms","start":"2022-05-31T18:18:37.778Z","end":"2022-05-31T18:18:37.955Z","steps":["trace[503652069] 'agreement among raft nodes before linearized reading'  (duration: 176.597188ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:19:36 up  1:07,  0 users,  load average: 4.75, 1.70, 1.30
	Linux embed-certs-20220531111208-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [21949b4d201e] <==
	* I0531 18:18:21.841474       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 18:18:21.848803       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 18:18:21.851217       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 18:18:21.851247       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 18:18:22.108578       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:18:22.130533       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:18:22.207439       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 18:18:22.211237       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 18:18:22.212032       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:18:22.214774       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:18:22.991545       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:18:23.971177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:18:23.978521       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:18:23.988251       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:18:24.149781       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:18:36.196964       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:18:36.697961       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:18:37.280419       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:18:38.908554       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.100.168.57]
	I0531 18:18:39.608194       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.21.102]
	I0531 18:18:39.616658       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.70.95]
	W0531 18:18:39.729602       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:18:39.729721       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:18:39.729747       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [508a2ace44d0] <==
	* I0531 18:18:38.788273       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-gt5gx"
	I0531 18:18:39.482045       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0531 18:18:39.489921       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:18:39.495961       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0531 18:18:39.497592       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.499008       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.503772       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.503827       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.507371       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.510742       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.510754       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:18:39.510874       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.510891       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.518065       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.518173       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.518689       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.518707       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.582440       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.582476       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.583991       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.584042       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:18:39.627439       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-fbr7m"
	I0531 18:18:39.691095       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-skg77"
	E0531 18:19:33.149890       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:19:33.153971       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [9d0cb6e145a0] <==
	* I0531 18:18:37.259158       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:18:37.259212       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:18:37.259253       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:18:37.275962       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:18:37.276004       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:18:37.276012       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:18:37.276030       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:18:37.276301       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:18:37.278131       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:18:37.278166       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:18:37.278238       1 config.go:317] "Starting service config controller"
	I0531 18:18:37.278242       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:18:37.378950       1 shared_informer.go:247] Caches are synced for service config 
	I0531 18:18:37.378964       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [084ed714bf4b] <==
	* W0531 18:18:20.896371       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:18:20.896380       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:18:20.896798       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:18:20.896830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:18:20.897069       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:18:20.897099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:18:20.897366       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:18:20.897427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:18:20.897498       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:18:20.897527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:18:20.897500       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:18:20.897536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:18:20.898282       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:18:20.898325       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:18:21.705018       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:18:21.705064       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:18:21.818673       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:18:21.818710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:18:21.851763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:18:21.851812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 18:18:21.946785       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:18:21.946895       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:18:22.035978       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:18:22.036050       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 18:18:25.290125       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:13:12 UTC, end at Tue 2022-05-31 18:19:37 UTC. --
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.595823    7224 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.595991    7224 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.596106    7224 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.596336    7224 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644357    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1b59e57a-d64d-4c49-bd89-ed3411f5a673-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-skg77\" (UID: \"1b59e57a-d64d-4c49-bd89-ed3411f5a673\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-skg77"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644408    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zsh\" (UniqueName: \"kubernetes.io/projected/be97178d-77d1-4249-833b-041d5a9d0d7c-kube-api-access-j9zsh\") pod \"coredns-64897985d-d97kt\" (UID: \"be97178d-77d1-4249-833b-041d5a9d0d7c\") " pod="kube-system/coredns-64897985d-d97kt"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644428    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbd2\" (UniqueName: \"kubernetes.io/projected/1b59e57a-d64d-4c49-bd89-ed3411f5a673-kube-api-access-kxbd2\") pod \"dashboard-metrics-scraper-56974995fc-skg77\" (UID: \"1b59e57a-d64d-4c49-bd89-ed3411f5a673\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-skg77"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644443    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dbc512a-0afd-4296-85f4-85c63277d4cb-xtables-lock\") pod \"kube-proxy-s8lnd\" (UID: \"8dbc512a-0afd-4296-85f4-85c63277d4cb\") " pod="kube-system/kube-proxy-s8lnd"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644459    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be97178d-77d1-4249-833b-041d5a9d0d7c-config-volume\") pod \"coredns-64897985d-d97kt\" (UID: \"be97178d-77d1-4249-833b-041d5a9d0d7c\") " pod="kube-system/coredns-64897985d-d97kt"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644473    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8dbc512a-0afd-4296-85f4-85c63277d4cb-kube-proxy\") pod \"kube-proxy-s8lnd\" (UID: \"8dbc512a-0afd-4296-85f4-85c63277d4cb\") " pod="kube-system/kube-proxy-s8lnd"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644487    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84ce306e-102d-4757-8cc7-fb6002c68aeb-tmp-dir\") pod \"metrics-server-b955d9d8-gt5gx\" (UID: \"84ce306e-102d-4757-8cc7-fb6002c68aeb\") " pod="kube-system/metrics-server-b955d9d8-gt5gx"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644501    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/60efb3f7-78cf-4254-94bf-7e679c8cd8f9-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-fbr7m\" (UID: \"60efb3f7-78cf-4254-94bf-7e679c8cd8f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-fbr7m"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644534    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9dp\" (UniqueName: \"kubernetes.io/projected/60efb3f7-78cf-4254-94bf-7e679c8cd8f9-kube-api-access-fc9dp\") pod \"kubernetes-dashboard-8469778f77-fbr7m\" (UID: \"60efb3f7-78cf-4254-94bf-7e679c8cd8f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-fbr7m"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644562    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xngm\" (UniqueName: \"kubernetes.io/projected/84ce306e-102d-4757-8cc7-fb6002c68aeb-kube-api-access-5xngm\") pod \"metrics-server-b955d9d8-gt5gx\" (UID: \"84ce306e-102d-4757-8cc7-fb6002c68aeb\") " pod="kube-system/metrics-server-b955d9d8-gt5gx"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644583    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd84ae59-1311-4da9-b670-3127bcdf000a-tmp\") pod \"storage-provisioner\" (UID: \"bd84ae59-1311-4da9-b670-3127bcdf000a\") " pod="kube-system/storage-provisioner"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644600    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2tf\" (UniqueName: \"kubernetes.io/projected/bd84ae59-1311-4da9-b670-3127bcdf000a-kube-api-access-zj2tf\") pod \"storage-provisioner\" (UID: \"bd84ae59-1311-4da9-b670-3127bcdf000a\") " pod="kube-system/storage-provisioner"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644616    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dbc512a-0afd-4296-85f4-85c63277d4cb-lib-modules\") pod \"kube-proxy-s8lnd\" (UID: \"8dbc512a-0afd-4296-85f4-85c63277d4cb\") " pod="kube-system/kube-proxy-s8lnd"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644664    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qxwg\" (UniqueName: \"kubernetes.io/projected/8dbc512a-0afd-4296-85f4-85c63277d4cb-kube-api-access-9qxwg\") pod \"kube-proxy-s8lnd\" (UID: \"8dbc512a-0afd-4296-85f4-85c63277d4cb\") " pod="kube-system/kube-proxy-s8lnd"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644676    7224 reconciler.go:157] "Reconciler: start to sync state"
	May 31 18:19:35 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:35.791011    7224 request.go:665] Waited for 1.101159923s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 31 18:19:35 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:35.855354    7224 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220531111208-2169\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220531111208-2169"
	May 31 18:19:36 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:36.049043    7224 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220531111208-2169\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220531111208-2169"
	May 31 18:19:36 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:36.224157    7224 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220531111208-2169\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220531111208-2169"
	May 31 18:19:36 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:36.434360    7224 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220531111208-2169\" already exists" pod="kube-system/etcd-embed-certs-20220531111208-2169"
	May 31 18:19:36 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:36.996715    7224 scope.go:110] "RemoveContainer" containerID="8a199e23fbe54b7ba0237510d7f878dfdc5d2825b2a7779d67538dbeb77ee04c"
	
	* 
	* ==> kubernetes-dashboard [1c0d799a5079] <==
	* 2022/05/31 18:18:46 Using namespace: kubernetes-dashboard
	2022/05/31 18:18:46 Using in-cluster config to connect to apiserver
	2022/05/31 18:18:46 Using secret token for csrf signing
	2022/05/31 18:18:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 18:18:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 18:18:46 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 18:18:46 Generating JWE encryption key
	2022/05/31 18:18:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 18:18:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 18:18:47 Initializing JWE encryption key from synchronized object
	2022/05/31 18:18:47 Creating in-cluster Sidecar client
	2022/05/31 18:18:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 18:18:47 Serving insecurely on HTTP port: 9090
	2022/05/31 18:19:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 18:18:46 Starting overwatch
	
	* 
	* ==> storage-provisioner [41a4ffb4010c] <==
	* I0531 18:18:39.493352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:18:39.511483       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:18:39.511818       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:18:39.520358       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:18:39.520601       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220531111208-2169_a854b820-5dfd-4b75-80c6-6d7188824ec1!
	I0531 18:18:39.578804       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d97590e-4964-4b19-98b4-71484d3bd1e1", APIVersion:"v1", ResourceVersion:"535", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220531111208-2169_a854b820-5dfd-4b75-80c6-6d7188824ec1 became leader
	I0531 18:18:39.621659       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220531111208-2169_a854b820-5dfd-4b75-80c6-6d7188824ec1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531111208-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-gt5gx
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531111208-2169 describe pod metrics-server-b955d9d8-gt5gx
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531111208-2169 describe pod metrics-server-b955d9d8-gt5gx: exit status 1 (264.559768ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-gt5gx" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531111208-2169 describe pod metrics-server-b955d9d8-gt5gx: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531111208-2169
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531111208-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60",
	        "Created": "2022-05-31T18:12:15.374174807Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233057,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:13:12.216616031Z",
	            "FinishedAt": "2022-05-31T18:13:10.24521198Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60/hostname",
	        "HostsPath": "/var/lib/docker/containers/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60/hosts",
	        "LogPath": "/var/lib/docker/containers/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60/002d31e6b083baff65a1f1c5c5dbd1e70fdfe0073e4b0f0a8136e26582bf7f60-json.log",
	        "Name": "/embed-certs-20220531111208-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531111208-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531111208-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b5a41605696b22b2cd91ad9d8c2332a08929394d3a8a272f0f44276eaa789464-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5a41605696b22b2cd91ad9d8c2332a08929394d3a8a272f0f44276eaa789464/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5a41605696b22b2cd91ad9d8c2332a08929394d3a8a272f0f44276eaa789464/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5a41605696b22b2cd91ad9d8c2332a08929394d3a8a272f0f44276eaa789464/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531111208-2169",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531111208-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531111208-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531111208-2169",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531111208-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d414d9fd0d749495cd4dcb4533150b9eff2e751eaf2b1121783a01bc2ac067c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52734"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52735"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52736"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52737"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52733"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2d414d9fd0d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531111208-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "002d31e6b083",
	                        "embed-certs-20220531111208-2169"
	                    ],
	                    "NetworkID": "c80f14b31c6469883124681d83b6953096f1892ca6f339d77c90232b70b0ad33",
	                    "EndpointID": "934ee3802ba392939a9dc393a27a439b54fcc4961d5924a2e21b0b30cf534b37",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
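
Note: throughout these logs the harness resolves container-to-host port mappings (e.g. 22/tcp -> 52734 in the inspect JSON above) with a docker Go template. A minimal Go sketch of that lookup, using a hypothetical hostPort helper that shells out the same way the cli_runner.go invocations do (this is illustrative, not code from the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the host port Docker published for a container port,
// mirroring: docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' <name>
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("embed-certs-20220531111208-2169", "22/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // per the inspect output above: 52734
}
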
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220531111208-2169 logs -n 25
E0531 11:19:39.837252    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220531111208-2169 logs -n 25: (2.844913506s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |               Profile               |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p kubenet-20220531104925-2169                    | kubenet-20220531104925-2169         | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:03 PDT |
	| start   | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:03 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --memory=2200                                     |                                     |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                     |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                     |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:04 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:04 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                     |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:05 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220531110241-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                     |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220531110241-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:08 PDT | 31 May 22 11:08 PDT |
	|         | old-k8s-version-20220531110241-2169               |                                     |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:05 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --memory=2200                                     |                                     |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                     |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                     |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                     |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                     |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                     |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:11 PDT | 31 May 22 11:11 PDT |
	|         | logs -n 25                                        |                                     |         |                |                     |                     |
	| logs    | no-preload-20220531110349-2169                    | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | logs -n 25                                        |                                     |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                     |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                     |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                     |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169               | old-k8s-version-20220531110241-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:16 PDT | 31 May 22 11:16 PDT |
	|         | logs -n 25                                        |                                     |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                     |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                     |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                     |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                     |         |                |                     |                     |
	| logs    | embed-certs-20220531111208-2169                   | embed-certs-20220531111208-2169     | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | logs -n 25                                        |                                     |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:13:10
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:13:10.912075   13553 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:13:10.912340   13553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:13:10.912345   13553 out.go:309] Setting ErrFile to fd 2...
	I0531 11:13:10.912349   13553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:13:10.912452   13553 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:13:10.912710   13553 out.go:303] Setting JSON to false
	I0531 11:13:10.927550   13553 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4359,"bootTime":1654016431,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:13:10.927657   13553 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:13:10.950011   13553 out.go:177] * [embed-certs-20220531111208-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:13:10.992542   13553 notify.go:193] Checking for updates...
	I0531 11:13:11.014435   13553 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:13:11.057209   13553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:13:11.078751   13553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:13:11.100576   13553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:13:11.122489   13553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:13:11.145156   13553 config.go:178] Loaded profile config "embed-certs-20220531111208-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:13:11.145842   13553 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:13:11.217087   13553 docker.go:137] docker version: linux-20.10.14
	I0531 11:13:11.217221   13553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:13:11.343566   13553 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:13:11.291646587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:13:11.387031   13553 out.go:177] * Using the docker driver based on existing profile
	I0531 11:13:11.408143   13553 start.go:284] selected driver: docker
	I0531 11:13:11.408166   13553 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:11.408292   13553 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:13:11.410542   13553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:13:11.535319   13553 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:13:11.48504376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:13:11.535472   13553 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:13:11.535492   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:11.535500   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:11.535513   13553 start_flags.go:306] config:
	{Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:11.579012   13553 out.go:177] * Starting control plane node embed-certs-20220531111208-2169 in cluster embed-certs-20220531111208-2169
	I0531 11:13:11.600345   13553 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:13:11.622204   13553 out.go:177] * Pulling base image ...
	I0531 11:13:11.664279   13553 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:13:11.664367   13553 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:13:11.664355   13553 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:13:11.664398   13553 cache.go:57] Caching tarball of preloaded images
	I0531 11:13:11.664595   13553 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:13:11.664618   13553 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 11:13:11.665489   13553 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/config.json ...
	I0531 11:13:11.728631   13553 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:13:11.728650   13553 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:13:11.728661   13553 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:13:11.728716   13553 start.go:352] acquiring machines lock for embed-certs-20220531111208-2169: {Name:mk6b884d6089a1578cdaf488d7f8fffed1b73a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:13:11.728792   13553 start.go:356] acquired machines lock for "embed-certs-20220531111208-2169" in 57.599µs
	I0531 11:13:11.728839   13553 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:13:11.728846   13553 fix.go:55] fixHost starting: 
	I0531 11:13:11.729063   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:13:11.794705   13553 fix.go:103] recreateIfNeeded on embed-certs-20220531111208-2169: state=Stopped err=<nil>
	W0531 11:13:11.794739   13553 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:13:11.838307   13553 out.go:177] * Restarting existing docker container for "embed-certs-20220531111208-2169" ...
	I0531 11:13:11.859598   13553 cli_runner.go:164] Run: docker start embed-certs-20220531111208-2169
	I0531 11:13:12.207124   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:13:12.278559   13553 kic.go:416] container "embed-certs-20220531111208-2169" state is running.
	I0531 11:13:12.279154   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
	I0531 11:13:12.351999   13553 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/config.json ...
	I0531 11:13:12.352414   13553 machine.go:88] provisioning docker machine ...
	I0531 11:13:12.352438   13553 ubuntu.go:169] provisioning hostname "embed-certs-20220531111208-2169"
	I0531 11:13:12.352499   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:12.426073   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:12.426254   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:12.426271   13553 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531111208-2169 && echo "embed-certs-20220531111208-2169" | sudo tee /etc/hostname
	I0531 11:13:12.546985   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531111208-2169
	
	I0531 11:13:12.547055   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:12.667019   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:12.667153   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:12.667167   13553 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531111208-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531111208-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531111208-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:13:12.778841   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:13:12.778871   13553 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:13:12.778892   13553 ubuntu.go:177] setting up certificates
	I0531 11:13:12.778902   13553 provision.go:83] configureAuth start
	I0531 11:13:12.778963   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
	I0531 11:13:12.851177   13553 provision.go:138] copyHostCerts
	I0531 11:13:12.851272   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:13:12.851284   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:13:12.851409   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:13:12.851635   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:13:12.851644   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:13:12.851702   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:13:12.851836   13553 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:13:12.851845   13553 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:13:12.851899   13553 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:13:12.852005   13553 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531111208-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531111208-2169]
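
Note: the provision step above issues a server certificate whose SAN list is exactly the san=[...] set logged. A hedged sketch of what such issuance can look like with Go's crypto/x509 (the helper name, key type, and usages are assumptions, not minikube's actual code):

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate with the profile CA, carrying the
// IP and DNS SANs from the provision.go line above (hypothetical helper).
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (derCert []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20220531111208-2169"}},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-20220531111208-2169"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	derCert, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return derCert, key, err
}
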
	I0531 11:13:13.012300   13553 provision.go:172] copyRemoteCerts
	I0531 11:13:13.012367   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:13:13.012411   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.083950   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:13.163687   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:13:13.181984   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 11:13:13.202769   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:13:13.220771   13553 provision.go:86] duration metric: configureAuth took 441.859262ms
	I0531 11:13:13.220785   13553 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:13:13.220931   13553 config.go:178] Loaded profile config "embed-certs-20220531111208-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:13:13.220996   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.290761   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.290928   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.290938   13553 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:13:13.403887   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:13:13.403899   13553 ubuntu.go:71] root file system type: overlay
	I0531 11:13:13.404028   13553 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:13:13.404100   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.473905   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.474051   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.474101   13553 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:13:13.592185   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:13:13.592261   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.662203   13553 main.go:134] libmachine: Using SSH client type: native
	I0531 11:13:13.662343   13553 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52734 <nil> <nil>}
	I0531 11:13:13.662357   13553 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:13:13.777966   13553 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:13:13.777983   13553 machine.go:91] provisioned docker machine in 1.425577512s
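
Note: the diff-or-swap one-liner above makes the unit update idempotent: docker is only re-enabled and restarted when the freshly rendered docker.service differs from the installed one. A rough Go equivalent of that compare-then-swap, assuming root privileges and a hypothetical updateDockerUnit helper (a sketch, not the harness's implementation):

package sketch

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateDockerUnit installs newUnit only if it differs from what is on disk,
// then reloads systemd and restarts docker -- the effect of the
// diff -u ... || { mv ...; daemon-reload; enable; restart; } command above.
func updateDockerUnit(newUnit []byte) error {
	const path = "/lib/systemd/system/docker.service"
	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, newUnit) {
		return nil // unit unchanged: nothing to reload or restart
	}
	if err := os.WriteFile(path+".new", newUnit, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}
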
	I0531 11:13:13.777991   13553 start.go:306] post-start starting for "embed-certs-20220531111208-2169" (driver="docker")
	I0531 11:13:13.777998   13553 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:13:13.778067   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:13:13.778116   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:13.848237   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:13.932021   13553 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:13:13.935470   13553 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:13:13.935482   13553 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:13:13.935489   13553 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:13:13.935497   13553 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:13:13.935504   13553 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:13:13.935616   13553 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:13:13.935749   13553 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:13:13.935898   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:13:13.942941   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:13:13.960034   13553 start.go:309] post-start completed in 182.035145ms
	I0531 11:13:13.960102   13553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:13:13.960153   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.029714   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.110010   13553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:13:14.114571   13553 fix.go:57] fixHost completed within 2.385751879s
	I0531 11:13:14.114581   13553 start.go:81] releasing machines lock for "embed-certs-20220531111208-2169", held for 2.385811827s
	I0531 11:13:14.114647   13553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531111208-2169
	I0531 11:13:14.183914   13553 ssh_runner.go:195] Run: systemctl --version
	I0531 11:13:14.183932   13553 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:13:14.183988   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.183999   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:14.259237   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.261186   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:13:14.338523   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:13:14.475654   13553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:13:14.485255   13553 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:13:14.485320   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:13:14.495800   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:13:14.508692   13553 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:13:14.578970   13553 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:13:14.646485   13553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:13:14.656123   13553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:13:14.719480   13553 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:13:14.729422   13553 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:13:14.764747   13553 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:13:14.842937   13553 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:13:14.843097   13553 cli_runner.go:164] Run: docker exec -t embed-certs-20220531111208-2169 dig +short host.docker.internal
	I0531 11:13:14.980735   13553 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:13:14.980851   13553 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:13:14.985209   13553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
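
Note: the bash one-liner above swaps the host.minikube.internal mapping by filtering the old entry out of /etc/hosts, appending the new one, and copying a temp file back over it. A simplified Go sketch of the same rewrite (setHostsEntry is a hypothetical helper, not minikube code):

package sketch

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// setHostsEntry drops any existing line ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" mapping, staging the result in /tmp before copying it
// over /etc/hosts.
func setHostsEntry(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t<name>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid()) // $$ in the shell version
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
}
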
	I0531 11:13:14.995189   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:15.066041   13553 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:13:15.066120   13553 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:13:15.099230   13553 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:13:15.099246   13553 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:13:15.099322   13553 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:13:15.128293   13553 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:13:15.128309   13553 cache_images.go:84] Images are preloaded, skipping loading
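
The preload decision is a set-membership check against docker's image cache; a minimal sketch of the same test over a truncated image list:

  # report any expected image missing from the local docker cache
  for img in k8s.gcr.io/kube-apiserver:v1.23.6 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/pause:3.6; do
    docker images --format '{{.Repository}}:{{.Tag}}' | grep -qxF "$img" || echo "missing: $img"
  done
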
	I0531 11:13:15.128404   13553 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
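
This query matters because the driver docker reports has to agree with the cgroupDriver (systemd) written into the KubeletConfiguration below; a mismatch leaves the kubelet unable to manage pods. To check by hand:

  docker info --format '{{.CgroupDriver}}'
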
	I0531 11:13:15.201388   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:15.201399   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:15.201412   13553 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:13:15.201426   13553 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531111208-2169 NodeName:embed-certs-20220531111208-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:13:15.201536   13553 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220531111208-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
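
The rendered file stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; a quick sanity check of the kinds present, assuming the /var/tmp/minikube/kubeadm.yaml path used later in this run:

  awk '/^kind:/ {print $2}' /var/tmp/minikube/kubeadm.yaml
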
	
	I0531 11:13:15.201613   13553 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220531111208-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 11:13:15.201672   13553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:13:15.209154   13553 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:13:15.209203   13553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:13:15.216165   13553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0531 11:13:15.228487   13553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:13:15.241530   13553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
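
With the drop-in, unit file, and kubeadm.yaml staged, systemd has to re-read units before the new ExecStart takes effect. A sketch of the usual cycle (in this run, kubeadm's kubelet-start phase performs the equivalent later):

  sudo systemctl daemon-reload
  sudo systemctl restart kubelet
  systemctl cat kubelet.service   # confirm the 10-kubeadm.conf drop-in is applied
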
	I0531 11:13:15.253811   13553 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:13:15.257550   13553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:13:15.266790   13553 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169 for IP: 192.168.58.2
	I0531 11:13:15.266894   13553 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:13:15.266943   13553 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:13:15.267029   13553 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/client.key
	I0531 11:13:15.267089   13553 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.key.cee25041
	I0531 11:13:15.267135   13553 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.key
	I0531 11:13:15.267327   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:13:15.267368   13553 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:13:15.267379   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:13:15.267410   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:13:15.267442   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:13:15.267475   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:13:15.267531   13553 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:13:15.268077   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:13:15.286065   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 11:13:15.303481   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:13:15.320612   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531111208-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 11:13:15.338097   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:13:15.354546   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:13:15.370990   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:13:15.387662   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:13:15.404111   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:13:15.420738   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:13:15.437866   13553 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:13:15.454492   13553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:13:15.467247   13553 ssh_runner.go:195] Run: openssl version
	I0531 11:13:15.472671   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:13:15.480357   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.484359   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.484403   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:13:15.489653   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:13:15.496718   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:13:15.504292   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.508441   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.508479   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:13:15.513962   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:13:15.521223   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:13:15.529012   13553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.533202   13553 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.533243   13553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:13:15.538555   13553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
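
Each CA cert is exposed to OpenSSL by symlinking it under its subject hash, which is exactly what produces the b5213941.0 / 51391683.0 / 3ec20f2e.0 names above; condensed into one reusable step:

  # link a cert into the OpenSSL CA dir under its subject-hash name
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"
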
	I0531 11:13:15.545740   13553 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531111208-2169 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531111208-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:13:15.545831   13553 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:13:15.574436   13553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:13:15.582575   13553 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:13:15.582590   13553 kubeadm.go:626] restartCluster start
	I0531 11:13:15.582637   13553 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:13:15.589452   13553 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:15.589508   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:13:15.658995   13553 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531111208-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:13:15.659167   13553 kubeconfig.go:127] "embed-certs-20220531111208-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:13:15.659511   13553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:13:15.660882   13553 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:13:15.668383   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:15.668428   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:15.676694   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:15.878830   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:15.879010   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:15.890181   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.078851   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.079013   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.089798   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.277533   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.277607   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.287591   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.478834   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.478983   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.490523   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.678854   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.679031   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.689622   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:16.878660   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:16.878750   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:16.890310   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.078866   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.079037   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.089324   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.278043   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.278132   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.287546   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.478843   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.478972   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.489976   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.677174   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.677289   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.687745   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:17.878762   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:17.878861   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:17.889375   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.076807   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.076876   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.085455   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.278341   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.278493   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.289377   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.476936   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.477072   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.486862   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.677619   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.677776   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.688531   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.688541   13553 api_server.go:165] Checking apiserver status ...
	I0531 11:13:18.688589   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:13:18.696891   13553 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.696907   13553 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
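
The probe loop above fires roughly every 200ms and gives up after about three seconds before falling back to a reconfigure; the same bounded wait expressed directly in shell:

  # poll for the apiserver process, ~200ms apart, bounded at ~3s
  for _ in $(seq 1 15); do
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
    sleep 0.2
  done
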
	I0531 11:13:18.696918   13553 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:13:18.696972   13553 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:13:18.727258   13553 docker.go:442] Stopping containers: [a90b3415795b f36f1b8ec616 151bcff24641 b44621a18266 2d9e1bd569b5 a9acd433a353 3df64dbfd2e2 7fc0f47f65d2 8ce1e9e63077 862692e6d3d2 19686116a07e 2784b5f463be d5a4a6345359 dcebe9e24d2f e6dac4e073bd b474066ffe56]
	I0531 11:13:18.727328   13553 ssh_runner.go:195] Run: docker stop a90b3415795b f36f1b8ec616 151bcff24641 b44621a18266 2d9e1bd569b5 a9acd433a353 3df64dbfd2e2 7fc0f47f65d2 8ce1e9e63077 862692e6d3d2 19686116a07e 2784b5f463be d5a4a6345359 dcebe9e24d2f e6dac4e073bd b474066ffe56
	I0531 11:13:18.758625   13553 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:13:18.769960   13553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:13:18.778599   13553 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 18:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 18:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 18:12 /etc/kubernetes/scheduler.conf
	
	I0531 11:13:18.778676   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:13:18.786996   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:13:18.795010   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:13:18.802414   13553 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.802469   13553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:13:18.810402   13553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:13:18.818706   13553 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:13:18.818775   13553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 11:13:18.825849   13553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:13:18.833007   13553 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:13:18.833017   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:18.877378   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:19.935016   13553 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057629084s)
	I0531 11:13:19.935035   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.058140   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.103466   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:20.152115   13553 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:13:20.152176   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:20.663756   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:21.164475   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:13:21.215674   13553 api_server.go:71] duration metric: took 1.063576049s to wait for apiserver process to appear ...
	I0531 11:13:21.215692   13553 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:13:21.215704   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:21.216920   13553 api_server.go:256] stopped: https://127.0.0.1:52733/healthz: Get "https://127.0.0.1:52733/healthz": EOF
	I0531 11:13:21.718992   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.167314   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:13:24.167334   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
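
A 403 for system:anonymous is expected this early and is itself a useful signal: the apiserver is already terminating TLS even though RBAC bootstrap has not finished. Reproducible against the forwarded port:

  curl -sk https://127.0.0.1:52733/healthz
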
	I0531 11:13:24.217141   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.222557   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:24.222574   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:24.719071   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:24.726442   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:24.726459   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:25.216999   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:25.222726   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:13:25.222741   13553 api_server.go:102] status: https://127.0.0.1:52733/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:13:25.717101   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:13:25.724848   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 200:
	ok
	I0531 11:13:25.732599   13553 api_server.go:140] control plane version: v1.23.6
	I0531 11:13:25.732611   13553 api_server.go:130] duration metric: took 4.516969769s to wait for apiserver health ...
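
Across the 500 responses the failing poststarthooks drain in order (apiextensions first, then scheduling priority classes, with rbac/bootstrap-roles last) until /healthz returns 200. Once credentials exist on the node, the same itemized view is available authenticated:

  kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'
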
	I0531 11:13:25.732616   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:13:25.732621   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:13:25.732632   13553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:13:25.741842   13553 system_pods.go:59] 8 kube-system pods found
	I0531 11:13:25.741859   13553 system_pods.go:61] "coredns-64897985d-45rxk" [1d1af550-c7eb-4d3d-a99e-ea74b583e84d] Running
	I0531 11:13:25.741863   13553 system_pods.go:61] "etcd-embed-certs-20220531111208-2169" [8b0ce277-ff5a-4e5b-b019-42c569689abb] Running
	I0531 11:13:25.741867   13553 system_pods.go:61] "kube-apiserver-embed-certs-20220531111208-2169" [b2087c02-761e-4919-8b92-9c3ae53f2821] Running
	I0531 11:13:25.741876   13553 system_pods.go:61] "kube-controller-manager-embed-certs-20220531111208-2169" [a56fc9fd-2eee-4f73-904d-0de881e33d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 11:13:25.741881   13553 system_pods.go:61] "kube-proxy-lgwn5" [9aad1763-1139-4bed-8c7d-a956e68d3386] Running
	I0531 11:13:25.741885   13553 system_pods.go:61] "kube-scheduler-embed-certs-20220531111208-2169" [9297a013-1420-42ab-8c26-7352aca786b3] Running
	I0531 11:13:25.741890   13553 system_pods.go:61] "metrics-server-b955d9d8-jbxp2" [ad7ca455-4720-4932-95d3-703a51595cb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:13:25.741895   13553 system_pods.go:61] "storage-provisioner" [d7df490e-a02b-4db2-912b-0d64caf0924b] Running
	I0531 11:13:25.741900   13553 system_pods.go:74] duration metric: took 9.263068ms to wait for pod list to return data ...
	I0531 11:13:25.741905   13553 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:13:25.745283   13553 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:13:25.745301   13553 node_conditions.go:123] node cpu capacity is 6
	I0531 11:13:25.745322   13553 node_conditions.go:105] duration metric: took 3.412768ms to run NodePressure ...
	I0531 11:13:25.745359   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:13:26.023161   13553 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 11:13:26.027527   13553 kubeadm.go:777] kubelet initialised
	I0531 11:13:26.027540   13553 kubeadm.go:778] duration metric: took 4.364923ms waiting for restarted kubelet to initialise ...
	I0531 11:13:26.027549   13553 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
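
This label-by-label gate is roughly what kubectl wait expresses directly; for example, for the kube-dns label from the list above:

  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
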
	I0531 11:13:26.034285   13553 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-45rxk" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.086851   13553 pod_ready.go:92] pod "coredns-64897985d-45rxk" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.086875   13553 pod_ready.go:81] duration metric: took 52.574215ms waiting for pod "coredns-64897985d-45rxk" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.086892   13553 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.093072   13553 pod_ready.go:92] pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.093082   13553 pod_ready.go:81] duration metric: took 6.180628ms waiting for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.093089   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.099122   13553 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:26.099133   13553 pod_ready.go:81] duration metric: took 6.039477ms waiting for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:26.099139   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:28.146822   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:30.645890   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:33.144139   13553 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:34.643120   13553 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:34.643133   13553 pod_ready.go:81] duration metric: took 8.544092302s waiting for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.643140   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lgwn5" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.647169   13553 pod_ready.go:92] pod "kube-proxy-lgwn5" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:34.647176   13553 pod_ready.go:81] duration metric: took 4.0327ms waiting for pod "kube-proxy-lgwn5" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:34.647182   13553 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:36.657938   13553 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:37.157814   13553 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:13:37.157828   13553 pod_ready.go:81] duration metric: took 2.510670323s waiting for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:37.157835   13553 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace to be "Ready" ...
	I0531 11:13:39.168841   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:41.170734   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:43.669021   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:45.669098   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:47.671012   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:50.168445   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:52.170563   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:54.668999   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:57.170207   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:13:59.170988   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:01.670072   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:03.670082   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:06.167570   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:08.167638   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:10.169806   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:12.670944   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:15.169753   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:17.667759   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:19.670165   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:21.670624   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:24.168819   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:26.669956   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:28.670940   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
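
metrics-server never reaching Ready here appears to be by design: the StartCluster config above registers MetricsServer under the fake.domain registry, so its image pull cannot succeed and the pod stays Pending for the full wait. The pod's events would confirm this, e.g.:

  kubectl -n kube-system describe pod metrics-server-b955d9d8-jbxp2 | tail -n 20
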
	W0531 11:14:33.760041   13098 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
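
kubeadm's own advice above condenses to a short triage sequence on the node:

  systemctl status kubelet --no-pager
  journalctl -xeu kubelet --no-pager | tail -n 50
  docker ps -a | grep kube | grep -v pause
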
	
	I0531 11:14:33.760073   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:14:34.182940   13098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:14:34.192616   13098 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:14:34.192666   13098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:14:34.200294   13098 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:14:34.200312   13098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:14:31.169348   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:33.668612   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	I0531 11:14:34.901603   13098 out.go:204]   - Generating certificates and keys ...
	I0531 11:14:36.104005   13098 out.go:204]   - Booting up control plane ...
	I0531 11:14:36.168890   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	[... pod_ready.go:102 checks of pod "metrics-server-b955d9d8-jbxp2" in "kube-system" (pid 13553) repeated roughly every 2.5s from 11:14:38 through 11:16:29, each reporting "Ready":"False" ...]
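	Each pod_ready check above reads the pod's Ready condition; a rough kubectl equivalent, for reproducing the wait by hand (the jsonpath filter is standard kubectl syntax):
	# Sketch: read the Ready condition of the pod this loop polls.
	kubectl -n kube-system get pod metrics-server-b955d9d8-jbxp2 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'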
	I0531 11:16:31.020061   13098 kubeadm.go:397] StartCluster complete in 8m1.555975545s
	I0531 11:16:31.020140   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0531 11:16:31.050974   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.050987   13098 logs.go:276] No container was found matching "kube-apiserver"
	I0531 11:16:31.051042   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0531 11:16:31.080367   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.080379   13098 logs.go:276] No container was found matching "etcd"
	I0531 11:16:31.080436   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0531 11:16:31.109454   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.109467   13098 logs.go:276] No container was found matching "coredns"
	I0531 11:16:31.109523   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0531 11:16:31.138029   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.138040   13098 logs.go:276] No container was found matching "kube-scheduler"
	I0531 11:16:31.138093   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0531 11:16:31.168696   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.168708   13098 logs.go:276] No container was found matching "kube-proxy"
	I0531 11:16:31.168763   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0531 11:16:31.198083   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.198100   13098 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0531 11:16:31.198162   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0531 11:16:31.226599   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.226611   13098 logs.go:276] No container was found matching "storage-provisioner"
	I0531 11:16:31.226669   13098 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0531 11:16:31.256444   13098 logs.go:274] 0 containers: []
	W0531 11:16:31.256457   13098 logs.go:276] No container was found matching "kube-controller-manager"
	I0531 11:16:31.256464   13098 logs.go:123] Gathering logs for kubelet ...
	I0531 11:16:31.256471   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0531 11:16:31.295837   13098 logs.go:123] Gathering logs for dmesg ...
	I0531 11:16:31.295851   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 11:16:31.307624   13098 logs.go:123] Gathering logs for describe nodes ...
	I0531 11:16:31.307639   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 11:16:31.359917   13098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 11:16:31.359927   13098 logs.go:123] Gathering logs for Docker ...
	I0531 11:16:31.359936   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0531 11:16:31.372199   13098 logs.go:123] Gathering logs for container status ...
	I0531 11:16:31.372211   13098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 11:16:33.427067   13098 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054868747s)
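	The log-gathering commands above are plain shell invocations and can be bundled into a single capture. A sketch using exactly the commands minikube ran:
	# Collect the same diagnostics into one file.
	{
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo journalctl -u docker -n 400
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	} > diagnostics.txt 2>&1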
	W0531 11:16:33.427193   13098 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[... the kubelet-check pair above repeated five times ...]
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0531 11:16:33.427208   13098 out.go:239] * 
	W0531 11:16:33.427350   13098 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output quoted above]
	
	W0531 11:16:33.427367   13098 out.go:239] * 
	W0531 11:16:33.427900   13098 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 11:16:33.489529   13098 out.go:177] 
	W0531 11:16:33.531716   13098 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output quoted above]
	
	W0531 11:16:33.531846   13098 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 11:16:33.531898   13098 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
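	The suggestion above corresponds to a retry along these lines (sketch; the profile name is a placeholder):
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd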
	I0531 11:16:33.573528   13098 out.go:177] 
	I0531 11:16:31.666134   13553 pod_ready.go:102] pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace has status "Ready":"False"
	[... the same pod_ready.go:102 checks repeated roughly every 2.5s from 11:16:33 through 11:17:36, each reporting "Ready":"False" ...]
	I0531 11:17:37.159534   13553 pod_ready.go:81] duration metric: took 4m0.004598971s waiting for pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace to be "Ready" ...
	E0531 11:17:37.159579   13553 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-jbxp2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 11:17:37.159598   13553 pod_ready.go:38] duration metric: took 4m11.13509036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:17:37.159636   13553 kubeadm.go:630] restartCluster took 4m21.580215027s
	W0531 11:17:37.159760   13553 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 11:17:37.159787   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:18:15.500549   13553 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.334520068s)
	I0531 11:18:15.500610   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:18:15.510190   13553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:18:15.517474   13553 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:18:15.517522   13553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:18:15.524667   13553 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:18:15.524695   13553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:18:15.987681   13553 out.go:204]   - Generating certificates and keys ...
	I0531 11:18:16.662161   13553 out.go:204]   - Booting up control plane ...
	I0531 11:18:23.769733   13553 out.go:204]   - Configuring RBAC rules ...
	I0531 11:18:24.147369   13553 cni.go:95] Creating CNI manager for ""
	I0531 11:18:24.147382   13553 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:18:24.147398   13553 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:18:24.147482   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:24.147486   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531111208-2169 minikube.k8s.io/updated_at=2022_05_31T11_18_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:18:24.306374   13553 ops.go:34] apiserver oom_adj: -16
	I0531 11:18:24.306508   13553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same 'kubectl get sa default' invocation repeated at roughly 0.5s intervals through 11:18:36 ...]
	I0531 11:18:36.967118   13553 kubeadm.go:1045] duration metric: took 12.819623878s to wait for elevateKubeSystemPrivileges.
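	The repeated 'kubectl get sa default' calls above are a readiness poll for the default ServiceAccount; an equivalent shell loop, sketched from the commands in this log:
	# Wait until the default ServiceAccount exists, retrying every 0.5s.
	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done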
	I0531 11:18:36.967132   13553 kubeadm.go:397] StartCluster complete in 5m21.418074262s
	I0531 11:18:36.967151   13553 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:18:36.967232   13553 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:18:36.967996   13553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:18:37.482167   13553 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531111208-2169" rescaled to 1
	I0531 11:18:37.482224   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:18:37.482227   13553 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:18:37.482262   13553 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:18:37.524538   13553 out.go:177] * Verifying Kubernetes components...
	I0531 11:18:37.482417   13553 config.go:178] Loaded profile config "embed-certs-20220531111208-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:18:37.524615   13553 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531111208-2169"
	I0531 11:18:37.524616   13553 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531111208-2169"
	I0531 11:18:37.524617   13553 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531111208-2169"
	I0531 11:18:37.524619   13553 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531111208-2169"
	I0531 11:18:37.541569   13553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 11:18:37.561493   13553 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531111208-2169"
	W0531 11:18:37.561516   13553 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:18:37.561518   13553 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531111208-2169"
	I0531 11:18:37.561520   13553 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531111208-2169"
	W0531 11:18:37.561531   13553 addons.go:165] addon metrics-server should already be in state true
	W0531 11:18:37.561533   13553 addons.go:165] addon dashboard should already be in state true
	I0531 11:18:37.561541   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:18:37.561578   13553 host.go:66] Checking if "embed-certs-20220531111208-2169" exists ...
	I0531 11:18:37.561578   13553 host.go:66] Checking if "embed-certs-20220531111208-2169" exists ...
	I0531 11:18:37.561586   13553 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531111208-2169"
	I0531 11:18:37.561612   13553 host.go:66] Checking if "embed-certs-20220531111208-2169" exists ...
	I0531 11:18:37.562004   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.562026   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.562070   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.562072   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.602087   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:37.712519   13553 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 11:18:37.808694   13553 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:18:37.749890   13553 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:18:37.771648   13553 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:18:37.805914   13553 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531111208-2169" to be "Ready" ...
	I0531 11:18:37.845474   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 11:18:37.845570   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:37.847687   13553 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531111208-2169"
	I0531 11:18:37.903833   13553 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 11:18:37.866630   13553 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	W0531 11:18:37.903829   13553 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:18:37.903903   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:18:37.903944   13553 host.go:66] Checking if "embed-certs-20220531111208-2169" exists ...
	I0531 11:18:37.940564   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 11:18:37.940576   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:18:37.940601   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:37.940644   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:37.943758   13553 cli_runner.go:164] Run: docker container inspect embed-certs-20220531111208-2169 --format={{.State.Status}}
	I0531 11:18:37.946451   13553 node_ready.go:49] node "embed-certs-20220531111208-2169" has status "Ready":"True"
	I0531 11:18:37.946466   13553 node_ready.go:38] duration metric: took 101.000042ms waiting for node "embed-certs-20220531111208-2169" to be "Ready" ...
	I0531 11:18:37.946474   13553 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:18:37.962428   13553 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-2z9z7" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:37.964914   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:18:38.042082   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:18:38.042600   13553 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:18:38.042609   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:18:38.042656   13553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531111208-2169
	I0531 11:18:38.044921   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:18:38.121207   13553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52734 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531111208-2169/id_rsa Username:docker}
	I0531 11:18:38.176322   13553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:18:38.186200   13553 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:18:38.186213   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:18:38.188931   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:18:38.188944   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:18:38.269461   13553 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:18:38.269474   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:18:38.281186   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:18:38.281200   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:18:38.293306   13553 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:18:38.293322   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:18:38.309056   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:18:38.309079   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:18:38.316624   13553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:18:38.380266   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:18:38.380283   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:18:38.385836   13553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:18:38.401326   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:18:38.401343   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:18:38.573392   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:18:38.573410   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:18:38.575237   13553 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.013775272s)
	I0531 11:18:38.575260   13553 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
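	The sed pipeline above splices a hosts block into the CoreDNS Corefile; the injected fragment, taken from the command itself, is:
	hosts {
	   192.168.65.2 host.minikube.internal
	   fallthrough
	}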
	I0531 11:18:38.603368   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:18:38.603385   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:18:38.671143   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:18:38.671156   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:18:38.690338   13553 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:18:38.690359   13553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:18:38.766383   13553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:18:38.905067   13553 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531111208-2169"
	I0531 11:18:39.672082   13553 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0531 11:18:39.693219   13553 addons.go:417] enableAddons completed in 2.210946115s
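Each addon above follows the same two-step pattern: the manifest is staged under /etc/kubernetes/addons over SSH, then applied in a batch with the version-pinned kubectl and the cluster-internal kubeconfig. A minimal sketch of that pattern, assuming shell access to the node (the host alias "minikube-node" and the use of plain scp/ssh are illustrative assumptions; minikube drives this through its own ssh_runner):

    # stage a manifest, then apply it with the bundled kubectl (paths from the log)
    scp metrics-server-service.yaml minikube-node:/etc/kubernetes/addons/metrics-server-service.yaml
    ssh minikube-node "sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-server-service.yaml"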
	I0531 11:18:39.980971   13553 pod_ready.go:102] pod "coredns-64897985d-2z9z7" in "kube-system" namespace has status "Ready":"False"
	I0531 11:18:40.983160   13553 pod_ready.go:92] pod "coredns-64897985d-2z9z7" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:40.983172   13553 pod_ready.go:81] duration metric: took 3.020730185s waiting for pod "coredns-64897985d-2z9z7" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.983178   13553 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-d97kt" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.988138   13553 pod_ready.go:92] pod "coredns-64897985d-d97kt" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:40.988150   13553 pod_ready.go:81] duration metric: took 4.942049ms waiting for pod "coredns-64897985d-d97kt" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.988157   13553 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.993977   13553 pod_ready.go:92] pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:40.993985   13553 pod_ready.go:81] duration metric: took 5.823114ms waiting for pod "etcd-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.993993   13553 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.998424   13553 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:40.998432   13553 pod_ready.go:81] duration metric: took 4.434783ms waiting for pod "kube-apiserver-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:40.998440   13553 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.002931   13553 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:41.002940   13553 pod_ready.go:81] duration metric: took 4.495408ms waiting for pod "kube-controller-manager-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.002947   13553 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8lnd" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.380632   13553 pod_ready.go:92] pod "kube-proxy-s8lnd" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:41.380642   13553 pod_ready.go:81] duration metric: took 377.686761ms waiting for pod "kube-proxy-s8lnd" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.380648   13553 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.782684   13553 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:18:41.782693   13553 pod_ready.go:81] duration metric: took 402.042479ms waiting for pod "kube-scheduler-embed-certs-20220531111208-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:18:41.782699   13553 pod_ready.go:38] duration metric: took 3.836222865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
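The readiness loop above polls each system-critical pod until its Ready condition is True, with a 6-minute cap per pod. Outside the harness, roughly the same check can be expressed with kubectl wait; the label selectors here are taken from the summary line above:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m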
	I0531 11:18:41.782714   13553 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:18:41.782762   13553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:18:41.792887   13553 api_server.go:71] duration metric: took 4.310653599s to wait for apiserver process to appear ...
	I0531 11:18:41.792904   13553 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:18:41.792911   13553 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52733/healthz ...
	I0531 11:18:41.798101   13553 api_server.go:266] https://127.0.0.1:52733/healthz returned 200:
	ok
	I0531 11:18:41.799178   13553 api_server.go:140] control plane version: v1.23.6
	I0531 11:18:41.799187   13553 api_server.go:130] duration metric: took 6.279368ms to wait for apiserver health ...
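The healthz probe is a plain HTTPS GET against the forwarded apiserver port, followed by a version query to record the control-plane version. A rough manual equivalent (port 52733 comes from the log; -k is needed because the endpoint serves a certificate signed by the cluster CA):

    curl -k https://127.0.0.1:52733/healthz   # prints "ok" when healthy
    kubectl get --raw /healthz                # same check through kubectl's credentials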
	I0531 11:18:41.799193   13553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:18:41.984608   13553 system_pods.go:59] 9 kube-system pods found
	I0531 11:18:41.984621   13553 system_pods.go:61] "coredns-64897985d-2z9z7" [5f4d99e6-3c5c-45b1-a942-86b44e5b650c] Running
	I0531 11:18:41.984625   13553 system_pods.go:61] "coredns-64897985d-d97kt" [be97178d-77d1-4249-833b-041d5a9d0d7c] Running
	I0531 11:18:41.984628   13553 system_pods.go:61] "etcd-embed-certs-20220531111208-2169" [f20662bd-6e19-4fd8-aaa7-4f2e75c0d76e] Running
	I0531 11:18:41.984632   13553 system_pods.go:61] "kube-apiserver-embed-certs-20220531111208-2169" [3b28e116-d7f2-4e27-9cc8-c7b1cced6c9a] Running
	I0531 11:18:41.984635   13553 system_pods.go:61] "kube-controller-manager-embed-certs-20220531111208-2169" [97bf3b0f-0ffc-4ded-90fb-fa83f9b26dbc] Running
	I0531 11:18:41.984640   13553 system_pods.go:61] "kube-proxy-s8lnd" [8dbc512a-0afd-4296-85f4-85c63277d4cb] Running
	I0531 11:18:41.984643   13553 system_pods.go:61] "kube-scheduler-embed-certs-20220531111208-2169" [78f46847-8517-4de1-bc0b-9d09823a3df7] Running
	I0531 11:18:41.984649   13553 system_pods.go:61] "metrics-server-b955d9d8-gt5gx" [84ce306e-102d-4757-8cc7-fb6002c68aeb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:18:41.984656   13553 system_pods.go:61] "storage-provisioner" [bd84ae59-1311-4da9-b670-3127bcdf000a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 11:18:41.984660   13553 system_pods.go:74] duration metric: took 185.464631ms to wait for pod list to return data ...
	I0531 11:18:41.984666   13553 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:18:42.179114   13553 default_sa.go:45] found service account: "default"
	I0531 11:18:42.179124   13553 default_sa.go:55] duration metric: took 194.455319ms for default service account to be created ...
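The "default" ServiceAccount is created asynchronously by the controller-manager after the namespace exists, which is why it gets its own wait step. The equivalent manual check:

    kubectl -n default get serviceaccount default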
	I0531 11:18:42.179129   13553 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 11:18:42.383815   13553 system_pods.go:86] 9 kube-system pods found
	I0531 11:18:42.383828   13553 system_pods.go:89] "coredns-64897985d-2z9z7" [5f4d99e6-3c5c-45b1-a942-86b44e5b650c] Running
	I0531 11:18:42.383833   13553 system_pods.go:89] "coredns-64897985d-d97kt" [be97178d-77d1-4249-833b-041d5a9d0d7c] Running
	I0531 11:18:42.383836   13553 system_pods.go:89] "etcd-embed-certs-20220531111208-2169" [f20662bd-6e19-4fd8-aaa7-4f2e75c0d76e] Running
	I0531 11:18:42.383843   13553 system_pods.go:89] "kube-apiserver-embed-certs-20220531111208-2169" [3b28e116-d7f2-4e27-9cc8-c7b1cced6c9a] Running
	I0531 11:18:42.383848   13553 system_pods.go:89] "kube-controller-manager-embed-certs-20220531111208-2169" [97bf3b0f-0ffc-4ded-90fb-fa83f9b26dbc] Running
	I0531 11:18:42.383851   13553 system_pods.go:89] "kube-proxy-s8lnd" [8dbc512a-0afd-4296-85f4-85c63277d4cb] Running
	I0531 11:18:42.383856   13553 system_pods.go:89] "kube-scheduler-embed-certs-20220531111208-2169" [78f46847-8517-4de1-bc0b-9d09823a3df7] Running
	I0531 11:18:42.383863   13553 system_pods.go:89] "metrics-server-b955d9d8-gt5gx" [84ce306e-102d-4757-8cc7-fb6002c68aeb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:18:42.383868   13553 system_pods.go:89] "storage-provisioner" [bd84ae59-1311-4da9-b670-3127bcdf000a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 11:18:42.383872   13553 system_pods.go:126] duration metric: took 204.740399ms to wait for k8s-apps to be running ...
	I0531 11:18:42.383880   13553 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 11:18:42.383928   13553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:18:42.393689   13553 system_svc.go:56] duration metric: took 9.806601ms WaitForService to wait for kubelet.
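The kubelet check relies on systemctl's exit code rather than its output: is-active --quiet exits 0 only when the unit is active. Mirroring the invocation from the log:

    sudo systemctl is-active --quiet service kubelet && echo kubelet is running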
	I0531 11:18:42.393702   13553 kubeadm.go:572] duration metric: took 4.911473845s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 11:18:42.393718   13553 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:18:42.581075   13553 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:18:42.581087   13553 node_conditions.go:123] node cpu capacity is 6
	I0531 11:18:42.581093   13553 node_conditions.go:105] duration metric: took 187.37252ms to run NodePressure ...
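The NodePressure verification reads capacity straight off the Node object; the same fields are available via jsonpath (node name from the log):

    kubectl get node embed-certs-20220531111208-2169 \
        -o jsonpath='cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}{"\n"}'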
	I0531 11:18:42.581100   13553 start.go:213] waiting for startup goroutines ...
	I0531 11:18:42.611055   13553 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:18:42.656851   13553 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220531111208-2169" cluster and "default" namespace by default
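The closing skew check compares the host kubectl (1.24.0) against the cluster (1.23.6); a one-minor-version difference is within kubectl's supported skew, so only an informational note is logged. The same comparison by hand:

    kubectl version --short
    # Client Version: v1.24.0
    # Server Version: v1.23.6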
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:13:12 UTC, end at Tue 2022-05-31 18:19:40 UTC. --
	May 31 18:18:13 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:13.950850927Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=56a1f53933ebad06fa301f971826f9c200dcdac554dfd25f543023ab5cf4d11e
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.007074881Z" level=info msg="ignoring event" container=56a1f53933ebad06fa301f971826f9c200dcdac554dfd25f543023ab5cf4d11e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.114097583Z" level=info msg="ignoring event" container=ffeb28ba5d7a11b567047696b639f05c2e7762becb7529a11aea62a55bb55df8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.232622526Z" level=info msg="ignoring event" container=b783e58ea86be2f84f4eb8fbdca755f9890eff4e1ca9a0fb885674e85e76bcb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.333757515Z" level=info msg="ignoring event" container=36bdced7ed2c78fb1567475f7bf9588e620c75a2ba41888ac13ea6ee62f12f85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.441117124Z" level=info msg="ignoring event" container=27e9e8a9e266212f7bd72780c9c5636676703dd3f6e2cdf27a936161b1a34f5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.544006045Z" level=info msg="ignoring event" container=423a549384f617c910c0c101fcc090c1c9b2921c3381e4c6e7e7936620a132ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:14 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:14.667522145Z" level=info msg="ignoring event" container=00fab95bb6f9f8825cf3763f64e3981f8ee7c97069b62953e491aecda07399a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:39 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:39.943093574Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:39 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:39.943136712Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:39 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:39.944308479Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:41 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:41.145050314Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 18:18:43 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:43.821260323Z" level=info msg="ignoring event" container=f06710f13f19b36c643259b19b825e3d4957f0478828348e99abb6be3e65b2a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:43 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:43.907648959Z" level=info msg="ignoring event" container=1202f0672d480b5cc64e47f7f94f4de3d18525ef50293372d613e5294da01335 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:47 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:47.153557420Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:18:47 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:47.382833296Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:18:50 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:50.470749796Z" level=info msg="ignoring event" container=33b7eec0627703395ec768bbd69dc1cc3738e32615913d17a7a579dc8b6a6352 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:51 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:51.149508565Z" level=info msg="ignoring event" container=8a199e23fbe54b7ba0237510d7f878dfdc5d2825b2a7779d67538dbeb77ee04c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:18:54 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:54.224498873Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:54 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:54.224688384Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:18:54 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:18:54.225918696Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:19:37 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:19:37.293397636Z" level=info msg="ignoring event" container=9892f0e49eec51dce0704bdc055b7323cdff0f87a344cd00856715b4e6a98665 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:19:37 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:19:37.918114753Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:19:37 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:19:37.918307593Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:19:37 embed-certs-20220531111208-2169 dockerd[129]: time="2022-05-31T18:19:37.919832869Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
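The repeated fake.domain pull failures in this block appear intentional rather than an infrastructure fault: the test points an image at a registry that cannot resolve (the kubelet log further down pulls fake.domain/k8s.gcr.io/echoserver:1.4). The failure reproduces directly against the daemon:

    docker pull fake.domain/k8s.gcr.io/echoserver:1.4
    # Error response from daemon: Get "https://fake.domain/v2/": dial tcp:
    #   lookup fake.domain on 192.168.65.2:53: no such host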
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	9892f0e49eec5       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   671bb48c6cf94
	1c0d799a50798       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   54 seconds ago       Running             kubernetes-dashboard        0                   412c4551f87c1
	41a4ffb4010c7       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   3eadf3a2630b1
	1394b7e618bc2       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   62e8a324a5383
	9d0cb6e145a06       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   491abeb1d262b
	508a2ace44d0e       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   f6f6ab7d919fd
	084ed714bf4b3       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   31e072757a623
	1661862fa295f       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   e9628b3810c22
	21949b4d201ef       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   5345769750f74
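The one Exited entry (dashboard-metrics-scraper, attempt 2) matches the CrashLoopBackOff seen later in the kubelet log. The same view can be pulled straight from the Docker runtime:

    docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'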
	
	* 
	* ==> coredns [1394b7e618bc] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
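The Reloading pair here lines up with the host-record injection in the start log: minikube rewrites the coredns ConfigMap to insert a hosts block ahead of the forward plugin, and CoreDNS hot-reloads the new Corefile (hence the changed configuration MD5). The injected fragment, reconstructed from the sed expression in the start log:

    hosts {
       192.168.65.2 host.minikube.internal
       fallthrough
    }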
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531111208-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531111208-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531111208-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T11_18_24_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:18:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531111208-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:19:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:19:33 +0000   Tue, 31 May 2022 18:18:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:19:33 +0000   Tue, 31 May 2022 18:18:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:19:33 +0000   Tue, 31 May 2022 18:18:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:19:33 +0000   Tue, 31 May 2022 18:19:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220531111208-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                bc1f1430-5bed-499b-aa06-97b0d93d15a0
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-d97kt                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-embed-certs-20220531111208-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kube-apiserver-embed-certs-20220531111208-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-embed-certs-20220531111208-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-s8lnd                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-embed-certs-20220531111208-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 metrics-server-b955d9d8-gt5gx                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-skg77                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-fbr7m                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 63s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    83s (x5 over 83s)  kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x5 over 83s)  kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  83s (x5 over 83s)  kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientMemory
	  Normal  Starting                 76s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                66s                kubelet     Node embed-certs-20220531111208-2169 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet     Node embed-certs-20220531111208-2169 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [1661862fa295] <==
	* {"level":"info","ts":"2022-05-31T18:18:18.381Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:18:18.381Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:18:18.381Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:18:18.381Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:18:18.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:18:18.721Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220531111208-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:18:18.725Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:18:18.726Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:18:18.727Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:18:37.860Z","caller":"traceutil/trace.go:171","msg":"trace[1759487273] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"146.918133ms","start":"2022-05-31T18:18:37.713Z","end":"2022-05-31T18:18:37.860Z","steps":["trace[1759487273] 'process raft request'  (duration: 146.800891ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T18:18:37.955Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.628085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:614"}
	{"level":"info","ts":"2022-05-31T18:18:37.955Z","caller":"traceutil/trace.go:171","msg":"trace[503652069] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:446; }","duration":"176.834549ms","start":"2022-05-31T18:18:37.778Z","end":"2022-05-31T18:18:37.955Z","steps":["trace[503652069] 'agreement among raft nodes before linearized reading'  (duration: 176.597188ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:19:40 up  1:07,  0 users,  load average: 4.37, 1.67, 1.29
	Linux embed-certs-20220531111208-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [21949b4d201e] <==
	* I0531 18:18:22.108578       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:18:22.130533       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:18:22.207439       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 18:18:22.211237       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 18:18:22.212032       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:18:22.214774       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:18:22.991545       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:18:23.971177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:18:23.978521       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:18:23.988251       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:18:24.149781       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:18:36.196964       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:18:36.697961       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:18:37.280419       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:18:38.908554       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.100.168.57]
	I0531 18:18:39.608194       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.21.102]
	I0531 18:18:39.616658       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.70.95]
	W0531 18:18:39.729602       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:18:39.729721       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:18:39.729747       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:19:39.687732       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:19:39.687807       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:19:39.687816       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
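Both OpenAPI failures trace back to the aggregated v1beta1.metrics.k8s.io APIService: its backing Service has no ready endpoints while the metrics-server pod is Pending, so the apiserver's discovery call returns a 503. The aggregation state can be inspected with:

    kubectl get apiservice v1beta1.metrics.k8s.io
    # AVAILABLE stays False (e.g. MissingEndpoints) until metrics-server is Ready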
	
	* 
	* ==> kube-controller-manager [508a2ace44d0] <==
	* I0531 18:18:38.788273       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-gt5gx"
	I0531 18:18:39.482045       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0531 18:18:39.489921       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:18:39.495961       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0531 18:18:39.497592       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.499008       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.503772       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.503827       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.507371       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.510742       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.510754       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:18:39.510874       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.510891       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.518065       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.518173       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.518689       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.518707       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.582440       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.582476       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:18:39.583991       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:18:39.584042       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:18:39.627439       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-fbr7m"
	I0531 18:18:39.691095       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-skg77"
	E0531 18:19:33.149890       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:19:33.153971       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [9d0cb6e145a0] <==
	* I0531 18:18:37.259158       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:18:37.259212       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:18:37.259253       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:18:37.275962       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:18:37.276004       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:18:37.276012       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:18:37.276030       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:18:37.276301       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:18:37.278131       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:18:37.278166       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:18:37.278238       1 config.go:317] "Starting service config controller"
	I0531 18:18:37.278242       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:18:37.378950       1 shared_informer.go:247] Caches are synced for service config 
	I0531 18:18:37.378964       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [084ed714bf4b] <==
	* W0531 18:18:20.896371       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:18:20.896380       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:18:20.896798       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:18:20.896830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:18:20.897069       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:18:20.897099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:18:20.897366       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:18:20.897427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:18:20.897498       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:18:20.897527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:18:20.897500       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:18:20.897536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:18:20.898282       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:18:20.898325       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:18:21.705018       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:18:21.705064       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:18:21.818673       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:18:21.818710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:18:21.851763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:18:21.851812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 18:18:21.946785       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:18:21.946895       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:18:22.035978       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:18:22.036050       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 18:18:25.290125       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:13:12 UTC, end at Tue 2022-05-31 18:19:41 UTC. --
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644501    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/60efb3f7-78cf-4254-94bf-7e679c8cd8f9-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-fbr7m\" (UID: \"60efb3f7-78cf-4254-94bf-7e679c8cd8f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-fbr7m"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644534    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9dp\" (UniqueName: \"kubernetes.io/projected/60efb3f7-78cf-4254-94bf-7e679c8cd8f9-kube-api-access-fc9dp\") pod \"kubernetes-dashboard-8469778f77-fbr7m\" (UID: \"60efb3f7-78cf-4254-94bf-7e679c8cd8f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-fbr7m"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644562    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xngm\" (UniqueName: \"kubernetes.io/projected/84ce306e-102d-4757-8cc7-fb6002c68aeb-kube-api-access-5xngm\") pod \"metrics-server-b955d9d8-gt5gx\" (UID: \"84ce306e-102d-4757-8cc7-fb6002c68aeb\") " pod="kube-system/metrics-server-b955d9d8-gt5gx"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644583    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd84ae59-1311-4da9-b670-3127bcdf000a-tmp\") pod \"storage-provisioner\" (UID: \"bd84ae59-1311-4da9-b670-3127bcdf000a\") " pod="kube-system/storage-provisioner"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644600    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2tf\" (UniqueName: \"kubernetes.io/projected/bd84ae59-1311-4da9-b670-3127bcdf000a-kube-api-access-zj2tf\") pod \"storage-provisioner\" (UID: \"bd84ae59-1311-4da9-b670-3127bcdf000a\") " pod="kube-system/storage-provisioner"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644616    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dbc512a-0afd-4296-85f4-85c63277d4cb-lib-modules\") pod \"kube-proxy-s8lnd\" (UID: \"8dbc512a-0afd-4296-85f4-85c63277d4cb\") " pod="kube-system/kube-proxy-s8lnd"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644664    7224 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qxwg\" (UniqueName: \"kubernetes.io/projected/8dbc512a-0afd-4296-85f4-85c63277d4cb-kube-api-access-9qxwg\") pod \"kube-proxy-s8lnd\" (UID: \"8dbc512a-0afd-4296-85f4-85c63277d4cb\") " pod="kube-system/kube-proxy-s8lnd"
	May 31 18:19:34 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:34.644676    7224 reconciler.go:157] "Reconciler: start to sync state"
	May 31 18:19:35 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:35.791011    7224 request.go:665] Waited for 1.101159923s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 31 18:19:35 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:35.855354    7224 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220531111208-2169\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220531111208-2169"
	May 31 18:19:36 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:36.049043    7224 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220531111208-2169\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220531111208-2169"
	May 31 18:19:36 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:36.224157    7224 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220531111208-2169\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220531111208-2169"
	May 31 18:19:36 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:36.434360    7224 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220531111208-2169\" already exists" pod="kube-system/etcd-embed-certs-20220531111208-2169"
	May 31 18:19:36 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:36.996715    7224 scope.go:110] "RemoveContainer" containerID="8a199e23fbe54b7ba0237510d7f878dfdc5d2825b2a7779d67538dbeb77ee04c"
	May 31 18:19:37 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:37.709892    7224 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-skg77 through plugin: invalid network status for"
	May 31 18:19:37 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:37.714267    7224 scope.go:110] "RemoveContainer" containerID="8a199e23fbe54b7ba0237510d7f878dfdc5d2825b2a7779d67538dbeb77ee04c"
	May 31 18:19:37 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:37.714554    7224 scope.go:110] "RemoveContainer" containerID="9892f0e49eec51dce0704bdc055b7323cdff0f87a344cd00856715b4e6a98665"
	May 31 18:19:37 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:37.714791    7224 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-skg77_kubernetes-dashboard(1b59e57a-d64d-4c49-bd89-ed3411f5a673)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-skg77" podUID=1b59e57a-d64d-4c49-bd89-ed3411f5a673
	May 31 18:19:37 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:37.920284    7224 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	May 31 18:19:37 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:37.920336    7224 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	May 31 18:19:37 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:37.920448    7224 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5xngm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-gt5gx_kube-system(84ce306e-102d-4757-8cc7-fb6002c68aeb): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 18:19:37 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:37.920497    7224 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-gt5gx" podUID=84ce306e-102d-4757-8cc7-fb6002c68aeb
	May 31 18:19:38 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:38.719761    7224 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-skg77 through plugin: invalid network status for"
	May 31 18:19:38 embed-certs-20220531111208-2169 kubelet[7224]: I0531 18:19:38.801587    7224 scope.go:110] "RemoveContainer" containerID="9892f0e49eec51dce0704bdc055b7323cdff0f87a344cd00856715b4e6a98665"
	May 31 18:19:38 embed-certs-20220531111208-2169 kubelet[7224]: E0531 18:19:38.801844    7224 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-skg77_kubernetes-dashboard(1b59e57a-d64d-4c49-bd89-ed3411f5a673)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-skg77" podUID=1b59e57a-d64d-4c49-bd89-ed3411f5a673
	
	* 
	* ==> kubernetes-dashboard [1c0d799a5079] <==
	* 2022/05/31 18:18:46 Starting overwatch
	2022/05/31 18:18:46 Using namespace: kubernetes-dashboard
	2022/05/31 18:18:46 Using in-cluster config to connect to apiserver
	2022/05/31 18:18:46 Using secret token for csrf signing
	2022/05/31 18:18:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 18:18:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 18:18:46 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 18:18:46 Generating JWE encryption key
	2022/05/31 18:18:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 18:18:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 18:18:47 Initializing JWE encryption key from synchronized object
	2022/05/31 18:18:47 Creating in-cluster Sidecar client
	2022/05/31 18:18:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 18:18:47 Serving insecurely on HTTP port: 9090
	2022/05/31 18:19:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [41a4ffb4010c] <==
	* I0531 18:18:39.493352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:18:39.511483       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:18:39.511818       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:18:39.520358       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:18:39.520601       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220531111208-2169_a854b820-5dfd-4b75-80c6-6d7188824ec1!
	I0531 18:18:39.578804       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d97590e-4964-4b19-98b4-71484d3bd1e1", APIVersion:"v1", ResourceVersion:"535", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220531111208-2169_a854b820-5dfd-4b75-80c6-6d7188824ec1 became leader
	I0531 18:18:39.621659       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220531111208-2169_a854b820-5dfd-4b75-80c6-6d7188824ec1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531111208-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-gt5gx
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531111208-2169 describe pod metrics-server-b955d9d8-gt5gx
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531111208-2169 describe pod metrics-server-b955d9d8-gt5gx: exit status 1 (280.828502ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-gt5gx" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531111208-2169 describe pod metrics-server-b955d9d8-gt5gx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (43.72s)
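
(For reference, the post-mortem check above can be reproduced by hand. The following is a hypothetical Go sketch of the shell-out pattern, built from the exact kubectl invocations recorded above; it is not minikube's actual helper code.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile context taken from the failing run above.
	ctx := "embed-certs-20220531111208-2169"

	// Same query as helpers_test.go:261: list pods not in phase Running.
	list := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}")
	out, err := list.CombinedOutput()
	fmt.Printf("non-running pods: %s (err: %v)\n", out, err)

	// Same describe as helpers_test.go:275. A NotFound / exit status 1 here,
	// as in the output above, typically means the pod was deleted or replaced
	// by its controller between the list and the describe.
	desc := exec.Command("kubectl", "--context", ctx,
		"describe", "pod", "metrics-server-b955d9d8-gt5gx")
	if out, err := desc.CombinedOutput(); err != nil {
		fmt.Printf("describe failed: %v\n%s", err, out)
	}
}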

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (x4)
E0531 11:26:57.111812    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (x6)
E0531 11:28:03.050593    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (x3)
E0531 11:28:35.730294    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (x2)
E0531 11:28:57.917193    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:29:00.025017    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:29:08.510937    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (x6)
E0531 11:30:14.677671    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:30:28.423657    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:30:29.186971    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:29.193448    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:29.205684    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:29.227898    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:29.270153    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:29.351613    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:29.512564    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:29.833325    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:30.475592    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:31.757858    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
E0531 11:30:34.318031    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:30:39.438149    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:30:49.680289    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:31:02.886025    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:31:10.162347    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (x2)
E0531 11:31:33.967275    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:31:51.124174    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0531 11:31:57.108154    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (x3)
E0531 11:32:27.912122    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:51937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:33:03.062529    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:33:13.060423    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:33:35.743949    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:33:46.316358    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:33:57.930196    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:34:00.038033    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:34:08.523094    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:34:26.124900    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:34:39.843184    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 11:35:14.690963    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:289: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 2 (425.201756ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:289: status error: exit status 2 (may be ok)
start_stop_delete_test.go:289: "old-k8s-version-20220531110241-2169" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220531110241-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220531110241-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.808µs)
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220531110241-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
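
The wait above polls for a Ready pod matching the k8s-app=kubernetes-dashboard label for up to 9 minutes before giving up. A rough manual equivalent, assuming kubectl is on PATH and the profile's kubeconfig context still exists (the flags below are standard kubectl, not taken from this log):

  # wait up to 9m for the dashboard pod to become Ready, mirroring the test's timeout
  kubectl --context old-k8s-version-20220531110241-2169 -n kubernetes-dashboard \
    wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
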
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531110241-2169
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531110241-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815",
	        "Created": "2022-05-31T18:02:47.387078025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:08:26.190082098Z",
	            "FinishedAt": "2022-05-31T18:08:23.336567271Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hostname",
	        "HostsPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/hosts",
	        "LogPath": "/var/lib/docker/containers/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815/df301a213db61cf66cc0970233a18eb6a72386f20393b94f3ff55ff8b0bc8815-json.log",
	        "Name": "/old-k8s-version-20220531110241-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531110241-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531110241-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f59e64d30cdaac3826823dee1bc788379af982abab3f34f979d0f9184f93428/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531110241-2169",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531110241-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531110241-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531110241-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "49bd121b76d28de5c01cec5b2b9b781e9e3115310e778c754e0a43752d617ff2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51934"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51937"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/49bd121b76d2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531110241-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "df301a213db6",
	                        "old-k8s-version-20220531110241-2169"
	                    ],
	                    "NetworkID": "371f88932f2f86b1e4c7d7ee4813eb521c132449a1b646e6adc62c4e1df95fe6",
	                    "EndpointID": "4a1e8f65e10d901150ca70abb003401b842c1eb5fb0be5bb24a9c98ec896642f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
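
The inspect dump above can be queried for single fields with the same Go-template form the harness uses elsewhere in this report. For example, to read the host port mapped to the apiserver's 8443/tcp (a manual sketch; given the Ports block above it should print 51937):

  docker container inspect \
    -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
    old-k8s-version-20220531110241-2169
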
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 2 (424.393376ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
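
Taken together, the two status checks show the container host Running while the apiserver is Stopped. Both values come from the same status struct, so they can be read in one call (a sketch; Host and APIServer are the template fields already used by the commands above):

  out/minikube-darwin-amd64 status -p old-k8s-version-20220531110241-2169 \
    --format='{{.Host}} / {{.APIServer}}'
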
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220531110241-2169 logs -n 25: (3.512629938s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169                        | old-k8s-version-20220531110241-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531111947-2169             | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531111947-2169             | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531112729-2169 --memory=2200            | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:28 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531112729-2169 --memory=2200            | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:29 PDT | 31 May 22 11:29 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531112729-2169                             | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:29 PDT | 31 May 22 11:29 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531112729-2169                             | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:29 PDT | 31 May 22 11:29 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:29 PDT | 31 May 22 11:29 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:29 PDT | 31 May 22 11:29 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
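
Long argument lists wrap across several rows of the Audit table above. Reassembled into a single invocation, the final newest-cni start entry reads as follows (reconstructed from the table, not copied from a shell history):

  out/minikube-darwin-amd64 start -p newest-cni-20220531112729-2169 --memory=2200 \
    --alsologtostderr --wait=apiserver,system_pods,default_sa \
    --feature-gates ServerSideApply=true --network-plugin=cni \
    --extra-config=kubelet.network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
    --driver=docker --kubernetes-version=v1.23.6
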
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:28:20
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:28:20.578170   14601 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:28:20.578344   14601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:28:20.578349   14601 out.go:309] Setting ErrFile to fd 2...
	I0531 11:28:20.578353   14601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:28:20.578450   14601 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:28:20.578728   14601 out.go:303] Setting JSON to false
	I0531 11:28:20.593905   14601 start.go:115] hostinfo: {"hostname":"37309.local","uptime":5269,"bootTime":1654016431,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:28:20.594000   14601 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:28:20.616053   14601 out.go:177] * [newest-cni-20220531112729-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:28:20.657488   14601 notify.go:193] Checking for updates...
	I0531 11:28:20.678853   14601 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:28:20.700919   14601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:20.721904   14601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:28:20.744090   14601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:28:20.766040   14601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:28:20.788411   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:20.789066   14601 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:28:20.860555   14601 docker.go:137] docker version: linux-20.10.14
	I0531 11:28:20.860683   14601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:28:20.986144   14601 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:28:20.919865429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:28:21.008666   14601 out.go:177] * Using the docker driver based on existing profile
	I0531 11:28:21.030286   14601 start.go:284] selected driver: docker
	I0531 11:28:21.030310   14601 start.go:806] validating driver "docker" against &{Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:21.030459   14601 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:28:21.033857   14601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:28:21.157996   14601 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:28:21.093562365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:28:21.158200   14601 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0531 11:28:21.158219   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:21.158227   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:21.158238   14601 start_flags.go:306] config:
	{Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:21.201959   14601 out.go:177] * Starting control plane node newest-cni-20220531112729-2169 in cluster newest-cni-20220531112729-2169
	I0531 11:28:21.224000   14601 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:28:21.245702   14601 out.go:177] * Pulling base image ...
	I0531 11:28:21.287911   14601 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:28:21.287945   14601 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:28:21.288004   14601 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:28:21.288030   14601 cache.go:57] Caching tarball of preloaded images
	I0531 11:28:21.288213   14601 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:28:21.288233   14601 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 11:28:21.289361   14601 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/config.json ...
	I0531 11:28:21.352247   14601 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:28:21.352265   14601 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:28:21.352276   14601 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:28:21.352353   14601 start.go:352] acquiring machines lock for newest-cni-20220531112729-2169: {Name:mk223b02c8d18fd8125fc1aec4677c6b6e6ebb27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:28:21.352426   14601 start.go:356] acquired machines lock for "newest-cni-20220531112729-2169" in 55.579µs
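
	start.go:352 records the lock's parameters (Delay:500ms Timeout:10m0s) but not its mechanics. A hypothetical sketch of that acquire-with-retry contract using an exclusive lock file; `acquireMachinesLock` and the lock path are invented for illustration:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireMachinesLock is a hypothetical stand-in for the lock taken at
// start.go:352: retry an exclusive lock-file create every `delay` until
// `timeout`, the two knobs shown in the log (500ms / 10m0s).
func acquireMachinesLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireMachinesLock("/tmp/minikube-machines.lock",
		500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held")
}
```
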
	I0531 11:28:21.352446   14601 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:28:21.352452   14601 fix.go:55] fixHost starting: 
	I0531 11:28:21.352679   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:21.419750   14601 fix.go:103] recreateIfNeeded on newest-cni-20220531112729-2169: state=Stopped err=<nil>
	W0531 11:28:21.419776   14601 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:28:21.441735   14601 out.go:177] * Restarting existing docker container for "newest-cni-20220531112729-2169" ...
	I0531 11:28:21.463797   14601 cli_runner.go:164] Run: docker start newest-cni-20220531112729-2169
	I0531 11:28:21.814105   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:21.885658   14601 kic.go:416] container "newest-cni-20220531112729-2169" state is running.
	I0531 11:28:21.886206   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:21.959677   14601 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/config.json ...
	I0531 11:28:21.960076   14601 machine.go:88] provisioning docker machine ...
	I0531 11:28:21.960098   14601 ubuntu.go:169] provisioning hostname "newest-cni-20220531112729-2169"
	I0531 11:28:21.960175   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.032376   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.032565   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.032580   14601 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531112729-2169 && echo "newest-cni-20220531112729-2169" | sudo tee /etc/hostname
	I0531 11:28:22.157035   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531112729-2169
	
	I0531 11:28:22.157115   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.228089   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.228232   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.228247   14601 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531112729-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531112729-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531112729-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:28:22.339894   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: 
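
	Nearly every step in this provisioning exchange starts with the same `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` probe: the container only exposes sshd on a randomly mapped host port (55182 here), so the port is re-read before each SSH dial. The format string is ordinary Go text/template syntax; a self-contained sketch against a mocked inspect document (struct shape and port value are illustrative):

```go
package main

import (
	"os"
	"text/template"
)

// Minimal stand-in for the JSON document `docker container inspect`
// returns; only the one field path the template walks is modelled.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct{ HostPort string }
	}
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]struct{ HostPort string }{
		"22/tcp": {{HostPort: "55182"}},
	}
	// The exact format string minikube passes with -f in the log:
	// index into the Ports map by key, take element 0, read HostPort.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints 55182
		panic(err)
	}
}
```
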
	I0531 11:28:22.339918   14601 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:28:22.339945   14601 ubuntu.go:177] setting up certificates
	I0531 11:28:22.339961   14601 provision.go:83] configureAuth start
	I0531 11:28:22.340025   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:22.411484   14601 provision.go:138] copyHostCerts
	I0531 11:28:22.411577   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:28:22.411587   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:28:22.411674   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:28:22.411878   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:28:22.411888   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:28:22.411944   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:28:22.412077   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:28:22.412083   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:28:22.412138   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:28:22.412247   14601 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531112729-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531112729-2169]
	I0531 11:28:22.494505   14601 provision.go:172] copyRemoteCerts
	I0531 11:28:22.494581   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:28:22.494633   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.566548   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:22.647934   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:28:22.667800   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 11:28:22.686536   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:28:22.706691   14601 provision.go:86] duration metric: configureAuth took 366.717286ms
	I0531 11:28:22.706707   14601 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:28:22.706872   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:22.706929   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.778451   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.778594   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.778608   14601 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:28:22.890950   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:28:22.890970   14601 ubuntu.go:71] root file system type: overlay
	I0531 11:28:22.891144   14601 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:28:22.891228   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.963222   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.963394   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.963443   14601 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:28:23.086363   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:28:23.086459   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.156610   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:23.156758   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:23.156784   14601 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:28:23.275544   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: 
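
	The `sudo diff -u … || { sudo mv …; systemctl … }` one-liner above is a guard: docker.service.new is installed, and the daemon reloaded and restarted, only when the freshly rendered unit differs from what is already on disk, so an unchanged config leaves the running engine alone. A minimal Go sketch of the same compare-then-swap, assuming passwordless sudo; the function name is invented:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnitIfChanged mirrors the shell guard from the log: swap in the
// freshly rendered unit and bounce docker only when its bytes differ from
// the unit currently on disk.
func installUnitIfChanged(current, rendered string) error {
	old, _ := os.ReadFile(current) // a missing file simply counts as "changed"
	next, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(old, next) {
		return os.Remove(rendered) // no diff: leave the running daemon alone
	}
	if err := os.Rename(rendered, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		cmd := exec.Command("sudo", append([]string{"systemctl", "-f"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installUnitIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
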
	I0531 11:28:23.275558   14601 machine.go:91] provisioned docker machine in 1.315489714s
	I0531 11:28:23.275564   14601 start.go:306] post-start starting for "newest-cni-20220531112729-2169" (driver="docker")
	I0531 11:28:23.275568   14601 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:28:23.275635   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:28:23.275687   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.345259   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.429063   14601 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:28:23.432446   14601 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:28:23.432461   14601 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:28:23.432468   14601 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:28:23.432475   14601 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:28:23.432482   14601 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:28:23.432621   14601 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:28:23.432759   14601 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:28:23.432905   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:28:23.439726   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:28:23.456662   14601 start.go:309] post-start completed in 181.091671ms
	I0531 11:28:23.456739   14601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:28:23.456786   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.526742   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.605692   14601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:28:23.610546   14601 fix.go:57] fixHost completed within 2.258116486s
	I0531 11:28:23.610566   14601 start.go:81] releasing machines lock for "newest-cni-20220531112729-2169", held for 2.258159111s
	I0531 11:28:23.610672   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:23.680718   14601 ssh_runner.go:195] Run: systemctl --version
	I0531 11:28:23.680719   14601 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:28:23.680772   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.680795   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.754229   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.757054   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.836240   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:28:23.968614   14601 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:28:23.978398   14601 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:28:23.978455   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:28:23.987743   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:28:24.000522   14601 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:28:24.067960   14601 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:28:24.135864   14601 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:28:24.145523   14601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:28:24.212934   14601 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:28:24.222595   14601 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:28:24.257967   14601 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:28:24.335762   14601 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:28:24.335888   14601 cli_runner.go:164] Run: docker exec -t newest-cni-20220531112729-2169 dig +short host.docker.internal
	I0531 11:28:24.460335   14601 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:28:24.460445   14601 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:28:24.464822   14601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
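
	network.go digs host.docker.internal from inside the container, then pins the answer as host.minikube.internal with the `grep -v …; echo …; } > /tmp/h.$$; sudo cp` shuffle above (the same pattern reappears later for control-plane.minikube.internal): drop any stale line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. The copy matters because a container's /etc/hosts is a bind mount and cannot be atomically replaced. The same logic as a hedged Go sketch; `pinHost` is an invented name:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep -v / echo / sudo cp pipeline in the log. It writes
// the file in place (like cp) rather than renaming a temp file over it,
// because a container's /etc/hosts is a bind mount.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t<name>$'
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
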
	I0531 11:28:24.475293   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:24.566562   14601 out.go:177]   - kubelet.network-plugin=cni
	I0531 11:28:24.588831   14601 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 11:28:24.610772   14601 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:28:24.610916   14601 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:28:24.642456   14601 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 11:28:24.642471   14601 docker.go:541] Images already preloaded, skipping extraction
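
	docker.go:610 and docker.go:541 decide whether the lz4 preload still needs extracting by listing what the daemon already holds and checking it against the expected image set. A sketch of that check, assuming a docker CLI on PATH; the expected list is copied from the stdout block above:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded mirrors the docker.go:541 decision: list what the
// daemon already holds and see whether every image from the preload
// manifest is present, in which case extraction is skipped.
func imagesPreloaded(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// The list printed in the stdout block above.
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.23.6",
		"k8s.gcr.io/kube-proxy:v1.23.6",
		"k8s.gcr.io/kube-scheduler:v1.23.6",
		"k8s.gcr.io/kube-controller-manager:v1.23.6",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/coredns/coredns:v1.8.6",
		"k8s.gcr.io/pause:3.6",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	ok, err := imagesPreloaded(expected)
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}
```
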
	I0531 11:28:24.642549   14601 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:28:24.671595   14601 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 11:28:24.671615   14601 cache_images.go:84] Images are preloaded, skipping loading
	I0531 11:28:24.671707   14601 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:28:24.745107   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:24.745118   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:24.745131   14601 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0531 11:28:24.745142   14601 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220531112729-2169 NodeName:newest-cni-20220531112729-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:28:24.745273   14601 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220531112729-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 11:28:24.745338   14601 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220531112729-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 11:28:24.745395   14601 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:28:24.752959   14601 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:28:24.753032   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:28:24.759894   14601 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0531 11:28:24.772209   14601 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:28:24.784449   14601 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
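
	kubeadm.go:158 and kubeadm.go:162 show the options struct being rendered into the three YAML documents that were just written to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template rendering of only the networking stanza, to illustrate the mechanics; the struct and template here are invented, not minikube's actual ones:

```go
package main

import (
	"os"
	"text/template"
)

// A sliver of the kubeadm options printed at kubeadm.go:158; minikube's
// real struct and template are much larger, this pair is illustrative.
type kubeadmOpts struct {
	K8sVersion  string
	DNSDomain   string
	PodSubnet   string
	ServiceCIDR string
}

const networkingTmpl = `kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("networking").Parse(networkingTmpl))
	// Values taken from the options dump above.
	err := t.Execute(os.Stdout, kubeadmOpts{
		K8sVersion:  "v1.23.6",
		DNSDomain:   "cluster.local",
		PodSubnet:   "192.168.111.111/16",
		ServiceCIDR: "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
```
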
	I0531 11:28:24.796924   14601 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:28:24.800433   14601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:28:24.809821   14601 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169 for IP: 192.168.58.2
	I0531 11:28:24.809929   14601 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:28:24.810011   14601 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:28:24.810092   14601 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/client.key
	I0531 11:28:24.810156   14601 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.key.cee25041
	I0531 11:28:24.810205   14601 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.key
	I0531 11:28:24.810423   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:28:24.810461   14601 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:28:24.810473   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:28:24.810508   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:28:24.810539   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:28:24.810574   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:28:24.810635   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:28:24.811155   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:28:24.827721   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 11:28:24.844468   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:28:24.861175   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 11:28:24.878420   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:28:24.896393   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:28:24.913732   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:28:24.930651   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:28:24.947273   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:28:24.963888   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:28:24.980969   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:28:24.998182   14601 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:28:25.010512   14601 ssh_runner.go:195] Run: openssl version
	I0531 11:28:25.015678   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:28:25.023240   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.026950   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.026984   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.032002   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:28:25.039256   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:28:25.046930   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.050635   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.050678   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.055739   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:28:25.062867   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:28:25.070401   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.074092   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.074134   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.079290   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
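
	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup convention: a CA in /etc/ssl/certs is only discoverable if a symlink named <subject-hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 in this run) points at it. A small Go wrapper around the same openssl invocation, assuming openssl on PATH; `hashLink` is an invented name:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink reproduces the two-step dance from the log: ask openssl for the
// certificate's subject hash, then symlink <hash>.0 so TLS stacks doing
// hashed directory lookups can find the CA.
func hashLink(certPEM, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPEM).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // drop any stale link, like ln -fs
	return link, os.Symlink(certPEM, link)
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}
```
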
	I0531 11:28:25.086508   14601 kubeadm.go:395] StartCluster: {Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:25.086608   14601 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:28:25.115424   14601 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:28:25.123088   14601 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:28:25.123106   14601 kubeadm.go:626] restartCluster start
	I0531 11:28:25.123166   14601 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:28:25.130286   14601 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.130356   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:25.201430   14601 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220531112729-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:25.201614   14601 kubeconfig.go:127] "newest-cni-20220531112729-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:28:25.202983   14601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:25.204253   14601 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:28:25.211900   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.211944   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.220060   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.422181   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.422379   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.433780   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.620174   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.620309   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.632389   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.820588   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.820750   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.831459   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.022339   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.022452   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.032812   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.222206   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.222338   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.233015   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.422196   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.422320   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.432962   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.620439   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.620525   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.629706   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.820697   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.820806   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.831264   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.022187   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.022343   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.032905   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.220251   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.220391   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.230734   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.420963   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.421067   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.430383   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.621762   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.621857   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.632734   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.820676   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.820772   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.831490   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.022170   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.022312   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.033000   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.221452   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.221582   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.232311   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.232323   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.232368   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.240337   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.240351   14601 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 11:28:28.240362   14601 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:28:28.240419   14601 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:28:28.270566   14601 docker.go:442] Stopping containers: [b8f7cf8c7771 432c9954381c 7c076965981f 963a9454c026 22c69b053d31 85c82e0a3dfd 0f95a6838cd9 02136fcb6f2a 1968673ca085 f103292226f6 78ffb0ab7dc5 7685bdfe2259 c2c4289070e6 53615169312d b84f3422d4f3 9b9f23fa412f c5d361a450c5]
	I0531 11:28:28.270636   14601 ssh_runner.go:195] Run: docker stop b8f7cf8c7771 432c9954381c 7c076965981f 963a9454c026 22c69b053d31 85c82e0a3dfd 0f95a6838cd9 02136fcb6f2a 1968673ca085 f103292226f6 78ffb0ab7dc5 7685bdfe2259 c2c4289070e6 53615169312d b84f3422d4f3 9b9f23fa412f c5d361a450c5
	I0531 11:28:28.300361   14601 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:28:28.310648   14601 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:28:28.318184   14601 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 18:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 18:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 18:27 /etc/kubernetes/scheduler.conf
	
	I0531 11:28:28.318233   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:28:28.325358   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:28:28.332511   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:28:28.339617   14601 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.339668   14601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:28:28.346531   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:28:28.353553   14601 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.353596   14601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 11:28:28.360560   14601 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:28:28.367876   14601 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:28:28.367886   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:28.411595   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.400253   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.531124   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.579235   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.634030   14601 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:28:29.634095   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.148677   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.646607   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.665446   14601 api_server.go:71] duration metric: took 1.031430401s to wait for apiserver process to appear ...
	I0531 11:28:30.665473   14601 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:28:30.665491   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:30.666674   14601 api_server.go:256] stopped: https://127.0.0.1:55181/healthz: Get "https://127.0.0.1:55181/healthz": EOF
	I0531 11:28:31.168738   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:33.658657   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:28:33.658673   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:28:33.667080   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:33.676399   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:28:33.676422   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:28:34.166979   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:34.174133   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:28:34.174146   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:28:34.667060   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:34.672889   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:28:34.672907   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:28:35.166970   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:35.172997   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 200:
	ok
	I0531 11:28:35.179538   14601 api_server.go:140] control plane version: v1.23.6
	I0531 11:28:35.179550   14601 api_server.go:130] duration metric: took 4.514120757s to wait for apiserver health ...
	I0531 11:28:35.179559   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:35.179568   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:35.179579   14601 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:28:35.186211   14601 system_pods.go:59] 8 kube-system pods found
	I0531 11:28:35.186226   14601 system_pods.go:61] "coredns-64897985d-m9wpk" [6f096a6e-7731-47f7-b98e-6eedbbd5b841] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 11:28:35.186231   14601 system_pods.go:61] "etcd-newest-cni-20220531112729-2169" [a5bfba25-ff48-42e0-9142-b085b624ec85] Running
	I0531 11:28:35.186234   14601 system_pods.go:61] "kube-apiserver-newest-cni-20220531112729-2169" [c890673a-c33b-4b7e-a6dd-241265cbe97e] Running
	I0531 11:28:35.186238   14601 system_pods.go:61] "kube-controller-manager-newest-cni-20220531112729-2169" [f085c574-4e96-49d9-b05a-9ae7e77756a4] Running
	I0531 11:28:35.186244   14601 system_pods.go:61] "kube-proxy-rml7v" [2a4877b2-6059-4ed5-b39a-d3aa0e50175a] Running
	I0531 11:28:35.186249   14601 system_pods.go:61] "kube-scheduler-newest-cni-20220531112729-2169" [13285495-f320-4400-a06d-5aa124a9f708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 11:28:35.186256   14601 system_pods.go:61] "metrics-server-b955d9d8-4nh24" [d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:28:35.186260   14601 system_pods.go:61] "storage-provisioner" [dfa38144-a068-4404-9087-254b825409e4] Running
	I0531 11:28:35.186263   14601 system_pods.go:74] duration metric: took 6.680457ms to wait for pod list to return data ...
	I0531 11:28:35.186268   14601 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:28:35.188933   14601 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:28:35.188950   14601 node_conditions.go:123] node cpu capacity is 6
	I0531 11:28:35.188962   14601 node_conditions.go:105] duration metric: took 2.690302ms to run NodePressure ...
	I0531 11:28:35.188973   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:35.352632   14601 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:28:35.361125   14601 ops.go:34] apiserver oom_adj: -16
	I0531 11:28:35.361143   14601 kubeadm.go:630] restartCluster took 10.238154537s
	I0531 11:28:35.361151   14601 kubeadm.go:397] StartCluster complete in 10.274772238s
	I0531 11:28:35.361170   14601 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:35.361244   14601 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:35.361875   14601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:35.364955   14601 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531112729-2169" rescaled to 1
	I0531 11:28:35.364987   14601 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:28:35.441880   14601 out.go:177] * Verifying Kubernetes components...
	I0531 11:28:35.365003   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:28:35.365025   14601 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:28:35.365144   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:35.442135   14601 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.442144   14601 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.479754   14601 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531112729-2169"
	W0531 11:28:35.479769   14601 addons.go:165] addon metrics-server should already be in state true
	I0531 11:28:35.479783   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:28:35.479760   14601 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531112729-2169"
	W0531 11:28:35.479821   14601 addons.go:165] addon dashboard should already be in state true
	I0531 11:28:35.479822   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.442127   14601 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.479852   14601 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531112729-2169"
	I0531 11:28:35.479855   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.442146   14601 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531112729-2169"
	W0531 11:28:35.479865   14601 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:28:35.479883   14601 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531112729-2169"
	I0531 11:28:35.479911   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.480183   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.480218   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.480300   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.481040   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.525057   14601 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 11:28:35.525155   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.676584   14601 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:28:35.624235   14601 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531112729-2169"
	I0531 11:28:35.639696   14601 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:28:35.713827   14601 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0531 11:28:35.676634   14601 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:28:35.731542   14601 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:28:35.751936   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.752094   14601 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:28:35.811040   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 11:28:35.849003   14601 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 11:28:35.811081   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:35.811141   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 11:28:35.811765   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.849157   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.886646   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:28:35.886796   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 11:28:35.886795   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.886823   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:28:35.886947   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.907168   14601 api_server.go:71] duration metric: took 542.159545ms to wait for apiserver process to appear ...
	I0531 11:28:35.907219   14601 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:28:35.907267   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:35.920781   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 200:
	ok
	I0531 11:28:35.923207   14601 api_server.go:140] control plane version: v1.23.6
	I0531 11:28:35.923240   14601 api_server.go:130] duration metric: took 16.012254ms to wait for apiserver health ...
	I0531 11:28:35.923248   14601 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:28:35.933658   14601 system_pods.go:59] 8 kube-system pods found
	I0531 11:28:35.933689   14601 system_pods.go:61] "coredns-64897985d-m9wpk" [6f096a6e-7731-47f7-b98e-6eedbbd5b841] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 11:28:35.933698   14601 system_pods.go:61] "etcd-newest-cni-20220531112729-2169" [a5bfba25-ff48-42e0-9142-b085b624ec85] Running
	I0531 11:28:35.933710   14601 system_pods.go:61] "kube-apiserver-newest-cni-20220531112729-2169" [c890673a-c33b-4b7e-a6dd-241265cbe97e] Running
	I0531 11:28:35.933728   14601 system_pods.go:61] "kube-controller-manager-newest-cni-20220531112729-2169" [f085c574-4e96-49d9-b05a-9ae7e77756a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 11:28:35.933736   14601 system_pods.go:61] "kube-proxy-rml7v" [2a4877b2-6059-4ed5-b39a-d3aa0e50175a] Running
	I0531 11:28:35.933747   14601 system_pods.go:61] "kube-scheduler-newest-cni-20220531112729-2169" [13285495-f320-4400-a06d-5aa124a9f708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 11:28:35.933759   14601 system_pods.go:61] "metrics-server-b955d9d8-4nh24" [d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:28:35.933779   14601 system_pods.go:61] "storage-provisioner" [dfa38144-a068-4404-9087-254b825409e4] Running
	I0531 11:28:35.933786   14601 system_pods.go:74] duration metric: took 10.533198ms to wait for pod list to return data ...
	I0531 11:28:35.933792   14601 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:28:35.938145   14601 default_sa.go:45] found service account: "default"
	I0531 11:28:35.938165   14601 default_sa.go:55] duration metric: took 4.366593ms for default service account to be created ...
	I0531 11:28:35.938197   14601 kubeadm.go:572] duration metric: took 573.199171ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 11:28:35.938223   14601 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:28:35.942426   14601 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:28:35.942450   14601 node_conditions.go:123] node cpu capacity is 6
	I0531 11:28:35.942465   14601 node_conditions.go:105] duration metric: took 4.236351ms to run NodePressure ...
	I0531 11:28:35.942485   14601 start.go:213] waiting for startup goroutines ...
	I0531 11:28:36.012965   14601 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:28:36.012980   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:28:36.013037   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:36.013049   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.013580   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.015074   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.092243   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.148273   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:28:36.245789   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:28:36.245817   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:28:36.247745   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:28:36.247758   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:28:36.345201   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:28:36.345217   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:28:36.345894   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:28:36.348010   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:28:36.348023   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:28:36.433009   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:28:36.433023   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:28:36.436199   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:28:36.436215   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:28:36.458750   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:28:36.458764   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:28:36.460817   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:28:36.555796   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:28:36.555811   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:28:36.660576   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:28:36.660591   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:28:36.746397   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:28:36.746413   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:28:36.762642   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:28:36.762659   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:28:36.779687   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:28:36.779700   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:28:36.851105   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:28:37.356022   14601 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.207732118s)
	I0531 11:28:37.356099   14601 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010190378s)
	I0531 11:28:37.447297   14601 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531112729-2169"
	I0531 11:28:37.650818   14601 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 11:28:37.709272   14601 addons.go:417] enableAddons completed in 2.34427737s
	I0531 11:28:37.742397   14601 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:28:37.763847   14601 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531112729-2169" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:08:26 UTC, end at Tue 2022-05-31 18:35:25 UTC. --
	May 31 18:08:26 old-k8s-version-20220531110241-2169 systemd[1]: Starting Docker Application Container Engine...
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.442700177Z" level=info msg="Starting up"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445540309Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445580709Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445602670Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.445613401Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447324824Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447356391Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447369067Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.447375179Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.454861167Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.459158936Z" level=info msg="Loading containers: start."
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.541211721Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.574193816Z" level=info msg="Loading containers: done."
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.582853381Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.582916167Z" level=info msg="Daemon has completed initialization"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 systemd[1]: Started Docker Application Container Engine.
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.603971346Z" level=info msg="API listen on [::]:2376"
	May 31 18:08:26 old-k8s-version-20220531110241-2169 dockerd[127]: time="2022-05-31T18:08:26.609838771Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-05-31T18:35:27Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:35:27 up  1:23,  0 users,  load average: 0.31, 0.66, 0.90
	Linux old-k8s-version-20220531110241-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:08:26 UTC, end at Tue 2022-05-31 18:35:27 UTC. --
	May 31 18:35:24 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 31 18:35:26 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	May 31 18:35:26 old-k8s-version-20220531110241-2169 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 31 18:35:26 old-k8s-version-20220531110241-2169 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 31 18:35:26 old-k8s-version-20220531110241-2169 kubelet[34033]: I0531 18:35:26.394167   34033 server.go:410] Version: v1.16.0
	May 31 18:35:26 old-k8s-version-20220531110241-2169 kubelet[34033]: I0531 18:35:26.394376   34033 plugins.go:100] No cloud provider specified.
	May 31 18:35:26 old-k8s-version-20220531110241-2169 kubelet[34033]: I0531 18:35:26.394387   34033 server.go:773] Client rotation is on, will bootstrap in background
	May 31 18:35:26 old-k8s-version-20220531110241-2169 kubelet[34033]: I0531 18:35:26.396107   34033 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 31 18:35:26 old-k8s-version-20220531110241-2169 kubelet[34033]: W0531 18:35:26.396815   34033 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	May 31 18:35:26 old-k8s-version-20220531110241-2169 kubelet[34033]: W0531 18:35:26.396886   34033 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	May 31 18:35:26 old-k8s-version-20220531110241-2169 kubelet[34033]: F0531 18:35:26.396910   34033 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	May 31 18:35:26 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 31 18:35:26 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 31 18:35:27 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	May 31 18:35:27 old-k8s-version-20220531110241-2169 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 31 18:35:27 old-k8s-version-20220531110241-2169 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 31 18:35:27 old-k8s-version-20220531110241-2169 kubelet[34045]: I0531 18:35:27.127384   34045 server.go:410] Version: v1.16.0
	May 31 18:35:27 old-k8s-version-20220531110241-2169 kubelet[34045]: I0531 18:35:27.127657   34045 plugins.go:100] No cloud provider specified.
	May 31 18:35:27 old-k8s-version-20220531110241-2169 kubelet[34045]: I0531 18:35:27.127672   34045 server.go:773] Client rotation is on, will bootstrap in background
	May 31 18:35:27 old-k8s-version-20220531110241-2169 kubelet[34045]: I0531 18:35:27.129231   34045 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 31 18:35:27 old-k8s-version-20220531110241-2169 kubelet[34045]: W0531 18:35:27.129814   34045 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	May 31 18:35:27 old-k8s-version-20220531110241-2169 kubelet[34045]: W0531 18:35:27.129875   34045 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	May 31 18:35:27 old-k8s-version-20220531110241-2169 kubelet[34045]: F0531 18:35:27.129901   34045 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	May 31 18:35:27 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 31 18:35:27 old-k8s-version-20220531110241-2169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 11:35:27.474510   14974 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
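The kubelet journal above shows the root cause of this post-mortem: kubelet v1.16 crash-loops with "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 1668), so the static-pod apiserver never comes back and "kubectl describe nodes" is refused on localhost:8443. Kubelet 1.16 has no cgroup v2 support; on a host that exposes only the unified cgroup hierarchy (as newer Docker Desktop VMs likely do), the cgroup v1 "cpu" controller mount it probes for is simply absent. A minimal Go sketch of that probe, for local diagnosis only (illustrative, not kubelet's actual code):

    // cpucgroup.go: report whether a cgroup v1 "cpu" controller is mounted,
    // which is the mount kubelet v1.16 fails to find in the journal above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        found := false
        for sc := bufio.NewScanner(f); sc.Scan(); {
            // /proc/mounts fields: device mountpoint fstype options dump pass
            fields := strings.Fields(sc.Text())
            if len(fields) < 4 || fields[2] != "cgroup" {
                continue // unified-hierarchy entries have fstype "cgroup2"
            }
            for _, opt := range strings.Split(fields[3], ",") {
                if opt == "cpu" {
                    found = true
                    fmt.Println("cpu cgroup mounted at", fields[1])
                }
            }
        }
        if !found {
            fmt.Println("no v1 cpu cgroup mount: kubelet 1.16 fails as above")
        }
    }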
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 2 (425.537952ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220531110241-2169" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.91s)
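For contrast, the newest-cni restart transcript earlier in this entry shows the readiness pattern working: api_server.go polls /healthz, treating 403 (RBAC for system:anonymous not yet bootstrapped) and 500 (poststarthooks such as rbac/bootstrap-roles still failing) as retryable, until the endpoint returns 200 "ok". A minimal, self-contained sketch of that polling pattern (the function name, timings, and TLS handling here are assumptions, not minikube's code):

    // healthzwait.go: poll an apiserver /healthz endpoint until it reports
    // "ok", tolerating the transient 403/500 responses seen in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver serves a cluster-CA cert, so this bare probe
            // skips verification; acceptable for local diagnostics only.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
                // 403 and 500 both appear in the transcript while the
                // control plane finishes its poststarthooks; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitHealthy("https://127.0.0.1:55181/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }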

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (43.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220531111947-2169 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169: exit status 2 (16.096736594s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
E0531 11:27:03.136808    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169: exit status 2 (16.103213635s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220531111947-2169 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
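Both status probes above returned "Stopped" where the test wanted "Paused". A quick external cross-check is to ask Docker for the node container's State.Paused flag, in the same template style the minikube logs use with docker container inspect; note that minikube pause typically freezes the Kubernetes containers inside the node rather than the node container itself, so a false here does not by itself contradict a paused cluster. Sketch (profile name taken from the log above; otherwise illustrative):

    // pausedcheck.go: query Docker for the node container's Paused flag.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        name := "default-k8s-different-port-20220531111947-2169"
        out, err := exec.Command("docker", "container", "inspect",
            "-f", "{{.State.Paused}}", name).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "inspect failed:", err)
            os.Exit(1)
        }
        fmt.Printf("container %s paused=%s\n", name, strings.TrimSpace(string(out)))
    }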
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220531111947-2169
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220531111947-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901",
	        "Created": "2022-05-31T18:19:53.754328959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 254580,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:20:54.210493022Z",
	            "FinishedAt": "2022-05-31T18:20:52.286893922Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901/hostname",
	        "HostsPath": "/var/lib/docker/containers/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901/hosts",
	        "LogPath": "/var/lib/docker/containers/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901-json.log",
	        "Name": "/default-k8s-different-port-20220531111947-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220531111947-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220531111947-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38143735d7f8c46ea5f88cd36796f56f1e3e375f3b2b9cb79c1cb4443f78bed7-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38143735d7f8c46ea5f88cd36796f56f1e3e375f3b2b9cb79c1cb4443f78bed7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38143735d7f8c46ea5f88cd36796f56f1e3e375f3b2b9cb79c1cb4443f78bed7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38143735d7f8c46ea5f88cd36796f56f1e3e375f3b2b9cb79c1cb4443f78bed7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220531111947-2169",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220531111947-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220531111947-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220531111947-2169",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220531111947-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625190a53b984200ac4c4136adfbae8f8188de966ffdbd8935d4eba14b515e91",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53877"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53878"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53879"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53880"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/625190a53b98",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220531111947-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2126d010e596",
	                        "default-k8s-different-port-20220531111947-2169"
	                    ],
	                    "NetworkID": "edbf55a2a15ca8d0c53f946fc87d4d604387c6b971b5f4b18e149d39e0a8f4e3",
	                    "EndpointID": "ef1a715e30ff0ba626cd15ad1356e4341a49632551d71ff19cbd6bf89d5dd6bc",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
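For reference: the inspect JSON above is what the harness consumes when it templates docker container inspect against the .NetworkSettings.Ports map (for example, 22/tcp is published on 127.0.0.1:53881). A minimal Go sketch of that lookup, assuming only a local docker CLI on PATH; the hostPort helper is hypothetical and not part of the harness:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort runs `docker container inspect` with the same Go template the
	// harness uses, returning the host port published for a container port.
	func hostPort(container, containerPort string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Usage against the profile inspected above.
		port, err := hostPort("default-k8s-different-port-20220531111947-2169", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh forwarded to 127.0.0.1:" + port)
	}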
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220531111947-2169 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220531111947-2169 logs -n 25: (2.807061175s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                                |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169               | old-k8s-version-20220531110241-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:16 PDT | 31 May 22 11:16 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531111208-2169                   | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531111208-2169                   | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220531111946-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | disable-driver-mounts-20220531111946-2169         |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169               | old-k8s-version-20220531110241-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
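Each Audit row above corresponds to one timed invocation of the built binary; durations such as "(2.807061175s)" elsewhere in this log come from wrapping the call with a wall-clock timer. A sketch of that pattern, using the binary path and profile from the rows above (the run helper is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// run executes one audited command and reports combined output plus
	// elapsed wall time, mirroring the harness's "(dbg) Run:" lines.
	func run(bin string, args ...string) (string, time.Duration, error) {
		start := time.Now()
		out, err := exec.Command(bin, args...).CombinedOutput()
		return string(out), time.Since(start), err
	}

	func main() {
		out, took, err := run("out/minikube-darwin-amd64",
			"pause", "-p", "default-k8s-different-port-20220531111947-2169",
			"--alsologtostderr", "-v=1")
		fmt.Printf("took %s, err=%v\n%s", took, err, out)
	}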
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:20:52
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:20:52.944881   14088 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:20:52.945084   14088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:20:52.945089   14088 out.go:309] Setting ErrFile to fd 2...
	I0531 11:20:52.945093   14088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:20:52.945194   14088 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:20:52.945466   14088 out.go:303] Setting JSON to false
	I0531 11:20:52.960339   14088 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4821,"bootTime":1654016431,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:20:52.960440   14088 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:20:52.982638   14088 out.go:177] * [default-k8s-different-port-20220531111947-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:20:53.025482   14088 notify.go:193] Checking for updates...
	I0531 11:20:53.047412   14088 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:20:53.069297   14088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:20:53.090403   14088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:20:53.112640   14088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:20:53.134605   14088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:20:53.156922   14088 config.go:178] Loaded profile config "default-k8s-different-port-20220531111947-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:20:53.157647   14088 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:20:53.231919   14088 docker.go:137] docker version: linux-20.10.14
	I0531 11:20:53.232051   14088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:20:53.359110   14088 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:20:53.293756437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:20:53.402586   14088 out.go:177] * Using the docker driver based on existing profile
	I0531 11:20:53.424356   14088 start.go:284] selected driver: docker
	I0531 11:20:53.424384   14088 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:20:53.424528   14088 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:20:53.427949   14088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:20:53.551765   14088 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:20:53.48889853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:20:53.551941   14088 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:20:53.551960   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:20:53.551966   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:20:53.551973   14088 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
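	The two config dumps above (driver validation and the final start_flags config) serialize the same cluster config. A trimmed, hypothetical Go struct covering just the fields this run exercises (port 8444, v1.23.6, one control-plane node on 192.168.58.2); field names follow the dump, but this is not minikube's complete type:

	package main

	import "fmt"

	// Trimmed view of the cluster config printed in the log; only the
	// fields this run actually exercises. A reconstruction for reading
	// the dump, not minikube's full ClusterConfig.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		NodePort          int
	}

	type Node struct {
		IP           string
		Port         int
		ControlPlane bool
		Worker       bool
	}

	type ClusterConfig struct {
		Name             string
		Memory           int // MiB
		CPUs             int
		Driver           string
		KubernetesConfig KubernetesConfig
		Nodes            []Node
		Addons           map[string]bool
	}

	func main() {
		cfg := ClusterConfig{
			Name:   "default-k8s-different-port-20220531111947-2169",
			Memory: 2200,
			CPUs:   2,
			Driver: "docker",
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.23.6",
				ClusterName:       "default-k8s-different-port-20220531111947-2169",
				NodePort:          8444,
			},
			Nodes:  []Node{{IP: "192.168.58.2", Port: 8444, ControlPlane: true, Worker: true}},
			Addons: map[string]bool{"dashboard": true, "metrics-server": true},
		}
		fmt.Printf("%+v\n", cfg)
	}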
	I0531 11:20:53.574240   14088 out.go:177] * Starting control plane node default-k8s-different-port-20220531111947-2169 in cluster default-k8s-different-port-20220531111947-2169
	I0531 11:20:53.595811   14088 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:20:53.617672   14088 out.go:177] * Pulling base image ...
	I0531 11:20:53.660942   14088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:20:53.661017   14088 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:20:53.661021   14088 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:20:53.661045   14088 cache.go:57] Caching tarball of preloaded images
	I0531 11:20:53.661255   14088 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:20:53.661288   14088 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
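	The preload step above is a plain cache lookup: minikube stats the expected tarball under the cache/preloaded-tarball directory and skips the download when the file is already present. A minimal sketch of that check; the path layout follows the log, and the preloadPath helper is hypothetical:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath builds the cache location that appears in the log:
	// <minikube home>/cache/preloaded-tarball/preloaded-images-k8s-v18-<k8s>-<runtime>-overlay2-amd64.tar.lz4
	func preloadPath(minikubeHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.23.6", "docker")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("preload missing, would download:", p)
		}
	}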
	I0531 11:20:53.662334   14088 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/config.json ...
	I0531 11:20:53.728191   14088 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:20:53.728208   14088 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:20:53.728219   14088 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:20:53.728284   14088 start.go:352] acquiring machines lock for default-k8s-different-port-20220531111947-2169: {Name:mk78e9fe98c6a3e232878ce765bd193e5b506828 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:20:53.728368   14088 start.go:356] acquired machines lock for "default-k8s-different-port-20220531111947-2169" in 55.664µs
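	The machines lock above is acquired with a 500ms retry delay and a 10-minute timeout (the {Name:mk... Delay:500ms Timeout:10m0s} spec). A sketch of the same acquire-with-timeout loop; tryLock is a hypothetical stand-in for minikube's actual named mutex:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryLock stands in for a named mutex: it atomically creates a lock
	// file and fails if another holder already created it.
	func tryLock(name string) (release func(), err error) {
		f, err := os.OpenFile("/tmp/"+name+".lock", os.O_CREATE|os.O_EXCL, 0o600)
		if err != nil {
			return nil, err
		}
		f.Close()
		return func() { os.Remove("/tmp/" + name + ".lock") }, nil
	}

	// acquire retries tryLock every delay until timeout, matching the
	// Delay:500ms Timeout:10m0s spec in the log.
	func acquire(name string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			if release, err := tryLock(name); err == nil {
				return release, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + name)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("default-k8s-different-port", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to start/fix the machine")
	}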
	I0531 11:20:53.728390   14088 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:20:53.728397   14088 fix.go:55] fixHost starting: 
	I0531 11:20:53.728613   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:20:53.795533   14088 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220531111947-2169: state=Stopped err=<nil>
	W0531 11:20:53.795566   14088 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:20:53.839440   14088 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220531111947-2169" ...
	I0531 11:20:53.861504   14088 cli_runner.go:164] Run: docker start default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.214277   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:20:54.285676   14088 kic.go:416] container "default-k8s-different-port-20220531111947-2169" state is running.
	I0531 11:20:54.286268   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.359103   14088 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/config.json ...
	I0531 11:20:54.359483   14088 machine.go:88] provisioning docker machine ...
	I0531 11:20:54.359511   14088 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220531111947-2169"
	I0531 11:20:54.359571   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.431991   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:54.432193   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:54.432206   14088 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220531111947-2169 && echo "default-k8s-different-port-20220531111947-2169" | sudo tee /etc/hostname
	I0531 11:20:54.553685   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220531111947-2169
	
	I0531 11:20:54.553769   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.625847   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:54.625998   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:54.626013   14088 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220531111947-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220531111947-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220531111947-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:20:54.740939   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:20:54.740960   14088 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:20:54.740983   14088 ubuntu.go:177] setting up certificates
	I0531 11:20:54.740993   14088 provision.go:83] configureAuth start
	I0531 11:20:54.741060   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.813502   14088 provision.go:138] copyHostCerts
	I0531 11:20:54.813586   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:20:54.813597   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:20:54.813681   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:20:54.813909   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:20:54.813929   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:20:54.813988   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:20:54.814120   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:20:54.814127   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:20:54.814187   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:20:54.814303   14088 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220531111947-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220531111947-2169]
	I0531 11:20:54.984093   14088 provision.go:172] copyRemoteCerts
	I0531 11:20:54.984161   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:20:54.984204   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.054898   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:55.140792   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:20:55.157975   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0531 11:20:55.174955   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:20:55.192282   14088 provision.go:86] duration metric: configureAuth took 451.28007ms
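	The configureAuth sequence above has two halves: copyHostCerts refreshes the copies under the minikube home ("found ..., removing ..." then "cp: ... --> ..."), and copyRemoteCerts pushes ca.pem, server.pem, and server-key.pem into /etc/docker over the forwarded SSH port. A minimal sketch of the host-side refresh half; refreshHostCert is a hypothetical helper, not minikube's exec_runner:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// refreshHostCert mirrors the copyHostCerts steps logged above: an
	// existing copy is removed, then the cert is recopied from certs/.
	func refreshHostCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		data, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		return os.WriteFile(dst, data, 0o600)
	}

	func main() {
		home := os.Getenv("MINIKUBE_HOME")
		for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
			src := filepath.Join(home, "certs", name)
			dst := filepath.Join(home, name)
			if err := refreshHostCert(src, dst); err != nil {
				fmt.Println(name, err)
			}
		}
	}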
	I0531 11:20:55.192295   14088 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:20:55.192463   14088 config.go:178] Loaded profile config "default-k8s-different-port-20220531111947-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:20:55.192523   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.261854   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.262008   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.262018   14088 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:20:55.374013   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:20:55.374024   14088 ubuntu.go:71] root file system type: overlay
	I0531 11:20:55.374182   14088 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:20:55.374259   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.444497   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.444646   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.444717   14088 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:20:55.566811   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:20:55.566903   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.637162   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.637315   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.637331   14088 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:20:55.756943   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:20:55.756956   14088 machine.go:91] provisioned docker machine in 1.397481881s
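	Note the idempotent update at the end of provisioning: the rendered unit is written to docker.service.new, and only when diff reports a difference is it moved into place, followed by daemon-reload, enable, and restart; an unchanged unit triggers no restart. A sketch of the same control flow in Go (writing /lib/systemd/system requires root; updateUnit is a hypothetical helper, not minikube's ssh_runner):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit mirrors the log's `diff || { mv; daemon-reload; restart }`
	// pattern: rewrite the unit and bounce docker only when the rendered
	// content actually changed.
	func updateUnit(path string, rendered []byte) error {
		current, _ := os.ReadFile(path) // a missing file simply means "needs install"
		if bytes.Equal(current, rendered) {
			return nil // unit unchanged; skip the restart entirely
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Println(err)
		}
	}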
	I0531 11:20:55.756966   14088 start.go:306] post-start starting for "default-k8s-different-port-20220531111947-2169" (driver="docker")
	I0531 11:20:55.756972   14088 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:20:55.757026   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:20:55.757069   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.826937   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:55.911267   14088 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:20:55.914903   14088 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:20:55.914917   14088 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:20:55.914925   14088 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:20:55.914929   14088 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:20:55.914937   14088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:20:55.915031   14088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:20:55.915160   14088 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:20:55.915312   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:20:55.922282   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:20:55.940097   14088 start.go:309] post-start completed in 183.123925ms
	I0531 11:20:55.940169   14088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:20:55.940229   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.011052   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.090867   14088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:20:56.095336   14088 fix.go:57] fixHost completed within 2.36696395s
	I0531 11:20:56.095355   14088 start.go:81] releasing machines lock for "default-k8s-different-port-20220531111947-2169", held for 2.367007248s
	I0531 11:20:56.095435   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.165619   14088 ssh_runner.go:195] Run: systemctl --version
	I0531 11:20:56.165622   14088 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:20:56.165682   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.165696   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.241266   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.242997   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.324128   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:20:56.452142   14088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:20:56.462221   14088 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:20:56.462271   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:20:56.472988   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:20:56.486894   14088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:20:56.552509   14088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:20:56.624892   14088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:20:56.634564   14088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:20:56.697617   14088 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:20:56.707335   14088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:20:56.742123   14088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:20:56.823696   14088 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:20:56.823896   14088 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220531111947-2169 dig +short host.docker.internal
	I0531 11:20:56.945635   14088 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:20:56.945757   14088 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:20:56.950017   14088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
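
The /etc/hosts rewrite above strips any stale line for the host name before appending the fresh IP mapping, and writes through a temp file so the replacement is a single copy. A small Go sketch of the same idea, with the IP and name taken from the log and minimal error handling:

package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, mirroring grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // single atomic swap, like the cp in the log
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}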
	I0531 11:20:56.959956   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:57.030149   14088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:20:57.030228   14088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:20:57.061515   14088 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:20:57.061532   14088 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:20:57.061601   14088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:20:57.092363   14088 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:20:57.092379   14088 cache_images.go:84] Images are preloaded, skipping loading
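
The two `docker images` listings above back the "images are preloaded" decision: if every image the cluster needs is already present in the daemon, the preload tarball is not extracted. A hedged sketch of that check in Go (the expected list is abbreviated from the log; this is not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadSatisfied reports whether every expected image is already
// known to the Docker daemon, so extraction can be skipped.
func preloadSatisfied(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloadSatisfied([]string{
		"k8s.gcr.io/kube-apiserver:v1.23.6",
		"k8s.gcr.io/pause:3.6",
	})
	fmt.Println(ok, err)
}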
	I0531 11:20:57.092456   14088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:20:57.165549   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:20:57.165561   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:20:57.165578   14088 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:20:57.165607   14088 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220531111947-2169 NodeName:default-k8s-different-port-20220531111947-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:20:57.165740   14088 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220531111947-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 11:20:57.165815   14088 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220531111947-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0531 11:20:57.165869   14088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:20:57.174125   14088 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:20:57.174180   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:20:57.181428   14088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0531 11:20:57.193769   14088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:20:57.206426   14088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0531 11:20:57.218608   14088 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:20:57.222399   14088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:20:57.231856   14088 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169 for IP: 192.168.58.2
	I0531 11:20:57.231960   14088 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:20:57.232024   14088 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:20:57.232114   14088 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.key
	I0531 11:20:57.232170   14088 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.key.cee25041
	I0531 11:20:57.232221   14088 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.key
	I0531 11:20:57.232425   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:20:57.232955   14088 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:20:57.232980   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:20:57.233064   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:20:57.233187   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:20:57.233279   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:20:57.233411   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:20:57.234195   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:20:57.251232   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 11:20:57.268133   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:20:57.284962   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 11:20:57.302399   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:20:57.319341   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:20:57.336647   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:20:57.353997   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:20:57.370819   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:20:57.388311   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:20:57.405153   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:20:57.422579   14088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:20:57.435885   14088 ssh_runner.go:195] Run: openssl version
	I0531 11:20:57.441459   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:20:57.449337   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.453299   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.453340   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.458858   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:20:57.467841   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:20:57.476418   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.480355   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.480411   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.485744   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:20:57.493027   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:20:57.500863   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.504963   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.505012   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.510223   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
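
The openssl/ln steps above install each CA certificate under its OpenSSL subject hash, which is how the /etc/ssl/certs lookup works: `openssl x509 -hash -noout` prints the hash, and `<hash>.0` is symlinked at the PEM file. A Go sketch of the same two steps, with paths mirroring the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the subject hash of a PEM certificate and links
// /etc/ssl/certs/<hash>.0 at it so OpenSSL can find the CA by hash.
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, mirroring the test -L guard in the log
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/etc/ssl/certs/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}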
	I0531 11:20:57.517409   14088 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:20:57.517502   14088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:20:57.546434   14088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:20:57.554470   14088 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:20:57.554485   14088 kubeadm.go:626] restartCluster start
	I0531 11:20:57.554529   14088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:20:57.561236   14088 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:57.561291   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:57.632105   14088 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220531111947-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:20:57.632282   14088 kubeconfig.go:127] "default-k8s-different-port-20220531111947-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:20:57.632611   14088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:20:57.633766   14088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:20:57.641261   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:57.641314   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:57.649587   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:57.851170   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:57.851374   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:57.861667   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.049817   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.049887   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.059064   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.250141   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.250267   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.260725   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.449710   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.449793   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.459510   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.651089   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.651214   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.661996   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.851729   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.851819   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.861332   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.051735   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.051889   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.063335   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.250489   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.250612   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.261366   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.451746   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.451884   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.461795   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.651683   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.651840   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.662763   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.851738   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.851863   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.862352   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.051822   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.051919   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.060856   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.251144   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.251295   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.262356   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.450234   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.450389   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.460742   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.650557   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.650686   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.661258   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.661270   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.661321   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.670223   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.670238   14088 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
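
The block of repeated pgrep probes above is a plain poll-until-deadline loop; when no kube-apiserver process appears in time, minikube concludes the cluster "needs reconfigure". A sketch of that probe loop in Go, with the interval inferred from the log timestamps (roughly 200ms between attempts):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID retries the same pgrep the log runs until it
// matches a process or the deadline passes. pgrep exits non-zero when
// nothing matched, which surfaces here as err != nil.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return "", fmt.Errorf("timed out waiting for kube-apiserver after %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	fmt.Println(pid, err)
}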
	I0531 11:21:00.670248   14088 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:21:00.670310   14088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:21:00.699869   14088 docker.go:442] Stopping containers: [b48c62911956 39ecd49e2959 0d1c428e0118 1d43fd380df3 5f61410a1644 fc5d85a557ec 018e14d1f471 e572fe01902d bab412bceb10 e5581b46b9e9 93aa5f139910 96cf36883161 2671bf2afe6f a57dbeccaab4 ed5be2dbd485 b2ae6df97b5f]
	I0531 11:21:00.699944   14088 ssh_runner.go:195] Run: docker stop b48c62911956 39ecd49e2959 0d1c428e0118 1d43fd380df3 5f61410a1644 fc5d85a557ec 018e14d1f471 e572fe01902d bab412bceb10 e5581b46b9e9 93aa5f139910 96cf36883161 2671bf2afe6f a57dbeccaab4 ed5be2dbd485 b2ae6df97b5f
	I0531 11:21:00.731543   14088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:21:00.744008   14088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:21:00.751326   14088 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 18:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 May 31 18:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 18:20 /etc/kubernetes/scheduler.conf
	
	I0531 11:21:00.751370   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0531 11:21:00.758546   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0531 11:21:00.765681   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0531 11:21:00.772792   14088 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.772846   14088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:21:00.779694   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0531 11:21:00.786641   14088 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.786689   14088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 11:21:00.793632   14088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:21:00.800995   14088 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:21:00.801006   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:00.845457   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.402938   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.519525   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.564444   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
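
Rather than a full `kubeadm init`, the restart path above replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A sketch of that sequence in Go, assuming kubeadm on PATH and the config path from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same phase order as the five commands in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prepend the versioned binaries dir, mirroring the env PATH=... prefix.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.23.6:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n%s", p, err, out)
			os.Exit(1)
		}
	}
}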
	I0531 11:21:01.614093   14088 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:21:01.614155   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.126101   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.624157   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.670383   14088 api_server.go:71] duration metric: took 1.056305017s to wait for apiserver process to appear ...
	I0531 11:21:02.670406   14088 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:21:02.670419   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:02.671578   14088 api_server.go:256] stopped: https://127.0.0.1:53880/healthz: Get "https://127.0.0.1:53880/healthz": EOF
	I0531 11:21:03.172114   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:05.213534   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:21:05.213551   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:21:05.671899   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:05.679000   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:21:05.679016   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:21:06.171614   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:06.177304   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:21:06.177318   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:21:06.671709   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:06.677584   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 200:
	ok
	I0531 11:21:06.684407   14088 api_server.go:140] control plane version: v1.23.6
	I0531 11:21:06.684421   14088 api_server.go:130] duration metric: took 4.014058772s to wait for apiserver health ...
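
The healthz probes above trace the apiserver's startup: 403 while RBAC has not yet granted the anonymous user access to /healthz, 500 while post-start hooks (rbac/bootstrap-roles and friends) are still pending, then 200 "ok". A Go sketch of that poll, skipping TLS verification because the probe hits 127.0.0.1 with the cluster's self-signed serving certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the endpoint until it answers 200 or the deadline
// passes, printing non-200 bodies like the [-]poststarthook lines above.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is typically just "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthy("https://127.0.0.1:53880/healthz", 30*time.Second))
}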
	I0531 11:21:06.684426   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:21:06.684430   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:21:06.684440   14088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:21:06.692117   14088 system_pods.go:59] 8 kube-system pods found
	I0531 11:21:06.692131   14088 system_pods.go:61] "coredns-64897985d-hw9jj" [a99971df-076d-4aba-a217-a2a75c87a745] Running
	I0531 11:21:06.692141   14088 system_pods.go:61] "etcd-default-k8s-different-port-20220531111947-2169" [297b8c39-20c3-4101-878e-1fab3854f875] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 11:21:06.692146   14088 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531111947-2169" [d3af2377-33bb-4d77-873c-bf4d620b1ccc] Running
	I0531 11:21:06.692152   14088 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531111947-2169" [3f64c0cd-80e0-4f01-b61c-62d6914342cc] Running
	I0531 11:21:06.692156   14088 system_pods.go:61] "kube-proxy-4ljp8" [b5ef4698-6857-48cc-828a-26043bc6f05f] Running
	I0531 11:21:06.692159   14088 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531111947-2169" [8efaf333-e7f0-4eb4-ace3-68210d3b9d66] Running
	I0531 11:21:06.692166   14088 system_pods.go:61] "metrics-server-b955d9d8-dj4pb" [837a7b7e-0528-4b97-af67-3dab5106f2a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:21:06.692172   14088 system_pods.go:61] "storage-provisioner" [45148f19-69b5-4e40-a3e5-284bafef13b2] Running
	I0531 11:21:06.692175   14088 system_pods.go:74] duration metric: took 7.732726ms to wait for pod list to return data ...
	I0531 11:21:06.692181   14088 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:21:06.695801   14088 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:21:06.695817   14088 node_conditions.go:123] node cpu capacity is 6
	I0531 11:21:06.695829   14088 node_conditions.go:105] duration metric: took 3.644789ms to run NodePressure ...
	I0531 11:21:06.695842   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:06.831589   14088 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 11:21:06.835556   14088 kubeadm.go:777] kubelet initialised
	I0531 11:21:06.835567   14088 kubeadm.go:778] duration metric: took 3.96448ms waiting for restarted kubelet to initialise ...
	I0531 11:21:06.835577   14088 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:21:06.841061   14088 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-hw9jj" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:06.857619   14088 pod_ready.go:92] pod "coredns-64897985d-hw9jj" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:06.857637   14088 pod_ready.go:81] duration metric: took 16.555578ms waiting for pod "coredns-64897985d-hw9jj" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:06.857647   14088 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:08.871386   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:11.369552   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:13.868531   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:14.869966   14088 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:14.869979   14088 pod_ready.go:81] duration metric: took 8.012423735s waiting for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:14.869987   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:16.883059   14088 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:18.883566   14088 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.883577   14088 pod_ready.go:81] duration metric: took 4.013634265s waiting for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.883584   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.888077   14088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.888084   14088 pod_ready.go:81] duration metric: took 4.485217ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.888090   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4ljp8" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.892647   14088 pod_ready.go:92] pod "kube-proxy-4ljp8" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.892655   14088 pod_ready.go:81] duration metric: took 4.561071ms waiting for pod "kube-proxy-4ljp8" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.892661   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.896699   14088 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.896707   14088 pod_ready.go:81] duration metric: took 4.041445ms waiting for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.896713   14088 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:20.908715   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:23.409540   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:25.909327   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:28.408295   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:30.409483   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:32.411745   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:34.911779   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:37.408629   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:39.412518   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:41.908120   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:43.908385   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:45.910144   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:48.411558   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:50.907773   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:52.911197   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:55.409130   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:57.909519   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:00.411176   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:02.907272   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:04.911276   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:07.408653   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:09.909908   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:12.408769   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:14.410410   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:16.910112   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:19.408777   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:21.410381   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:23.410768   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:25.908736   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:27.910873   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:30.410574   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:32.410928   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:34.907313   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:36.907978   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:39.410775   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:41.908034   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:43.909646   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:46.409183   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:48.909862   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:51.410737   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:53.908132   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:55.909762   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:57.909964   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:00.408361   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:02.907705   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:04.908465   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:07.407592   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:09.410553   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:11.907998   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:13.910191   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:16.407762   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:18.408926   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:20.907622   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:23.410482   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:25.907564   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:27.910471   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:30.409056   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:32.908170   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:34.908332   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:37.407176   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:39.409174   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:41.907391   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:44.408139   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:46.906854   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:48.908411   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:51.408059   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:53.907472   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:56.407152   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:58.908572   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:01.407858   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:03.409607   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:05.908557   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:08.408406   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:10.410209   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:12.909722   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:15.407046   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:17.908838   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:20.406716   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:22.407482   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:24.408743   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:26.907123   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:28.908785   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:31.406500   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:33.407133   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:35.407499   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:37.409435   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:39.907341   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:42.407598   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:44.408678   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:46.408812   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:48.906968   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:50.907917   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:52.909090   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:55.406872   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:57.407488   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:59.906302   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:01.907576   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:04.408849   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:06.907815   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:09.413253   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:11.908170   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:14.407071   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:16.906996   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:18.900324   14088 pod_ready.go:81] duration metric: took 4m0.006483722s waiting for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" ...
	E0531 11:25:18.900352   14088 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 11:25:18.900380   14088 pod_ready.go:38] duration metric: took 4m12.067852815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:25:18.900441   14088 kubeadm.go:630] restartCluster took 4m21.349122361s
	W0531 11:25:18.900579   14088 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
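	The four-minute run of pod_ready messages above is minikube polling the metrics-server pod's Ready condition roughly every 2.5 seconds until its 4m0s budget expires. A minimal client-go sketch of an equivalent readiness poll (names and kubeconfig path are illustrative assumptions, not minikube's actual implementation):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Kubeconfig path is an assumption for this sketch.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // Poll every 2.5s, give up after 4m, mirroring the cadence in the log.
	        err = wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
	            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	                "metrics-server-b955d9d8-dj4pb", metav1.GetOptions{})
	            if err != nil {
	                return false, nil // treat lookup errors as transient and keep polling
	            }
	            for _, c := range pod.Status.Conditions {
	                if c.Type == corev1.PodReady {
	                    return c.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	        fmt.Println("ready wait result:", err) // the failing run above ends in a timeout error here
	    }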
	I0531 11:25:18.900609   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:25:57.271214   14088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.371057145s)
	I0531 11:25:57.271273   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:25:57.280781   14088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:25:57.288357   14088 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:25:57.288400   14088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:25:57.295934   14088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:25:57.295968   14088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:25:57.778963   14088 out.go:204]   - Generating certificates and keys ...
	I0531 11:25:58.867689   14088 out.go:204]   - Booting up control plane ...
	I0531 11:26:05.418464   14088 out.go:204]   - Configuring RBAC rules ...
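	Because the stale-config check found no /etc/kubernetes/*.conf files, minikube skipped config cleanup and went straight from kubeadm reset to kubeadm init, whose phases (certificates, control plane, RBAC) are echoed above. A rough os/exec sketch of the init invocation as it appears in the log (minikube actually runs it through its ssh_runner inside the node container; the preflight-error list is abbreviated here):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        // Illustrative only: run kubeadm init with the version-pinned PATH
	        // prefix copied from the log line. Flag list abbreviated.
	        cmd := exec.Command("sudo", "env",
	            "PATH=/var/lib/minikube/binaries/v1.23.6:"+os.Getenv("PATH"),
	            "kubeadm", "init",
	            "--config", "/var/tmp/minikube/kubeadm.yaml",
	            "--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,Mem,SystemVerification")
	        out, err := cmd.CombinedOutput()
	        fmt.Println(string(out))
	        if err != nil {
	            fmt.Fprintln(os.Stderr, "kubeadm init failed:", err)
	        }
	    }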
	I0531 11:26:05.792173   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:26:05.792184   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:26:05.792207   14088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:26:05.792275   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531111947-2169 minikube.k8s.io/updated_at=2022_05_31T11_26_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:05.792280   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:05.805712   14088 ops.go:34] apiserver oom_adj: -16
	I0531 11:26:05.878613   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:06.556979   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:07.056117   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:07.557355   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:08.056025   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:08.556472   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:09.056023   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:09.556302   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:10.056149   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:10.556128   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:11.056252   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:11.556182   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:12.056522   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:12.556864   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:13.056171   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:13.556181   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:14.056067   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:14.558064   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:15.056191   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:15.557911   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:16.057972   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:16.556078   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:17.056231   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:17.557333   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:18.056063   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:18.556169   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:18.611007   14088 kubeadm.go:1045] duration metric: took 12.818939911s to wait for elevateKubeSystemPrivileges.
	I0531 11:26:18.611026   14088 kubeadm.go:397] StartCluster complete in 5m21.097519624s
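	The burst of `kubectl get sa default` calls above is minikube waiting, on a roughly half-second cadence, for the token controller to create the default ServiceAccount after elevateKubeSystemPrivileges. A stripped-down sketch of that retry loop (the 2-minute budget is an assumption for illustration):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(2 * time.Minute) // budget is an assumption
	        for time.Now().Before(deadline) {
	            // Succeeds once the default ServiceAccount exists.
	            err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.6/kubectl",
	                "get", "sa", "default",
	                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
	            if err == nil {
	                fmt.Println("default ServiceAccount is present")
	                return
	            }
	            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
	        }
	        fmt.Println("timed out waiting for default ServiceAccount")
	    }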
	I0531 11:26:18.611048   14088 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:26:18.611133   14088 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:26:18.611692   14088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:26:19.128293   14088 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531111947-2169" rescaled to 1
	I0531 11:26:19.128334   14088 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:26:19.128356   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:26:19.128392   14088 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:26:19.151017   14088 out.go:177] * Verifying Kubernetes components...
	I0531 11:26:19.128876   14088 config.go:178] Loaded profile config "default-k8s-different-port-20220531111947-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:26:19.151067   14088 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.151078   14088 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.151079   14088 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.151095   14088 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.207673   14088 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.207686   14088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531111947-2169"
	W0531 11:26:19.207694   14088 addons.go:165] addon dashboard should already be in state true
	I0531 11:26:19.207682   14088 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531111947-2169"
	W0531 11:26:19.207705   14088 addons.go:165] addon metrics-server should already be in state true
	I0531 11:26:19.207707   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:26:19.207718   14088 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531111947-2169"
	W0531 11:26:19.207732   14088 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:26:19.207745   14088 host.go:66] Checking if "default-k8s-different-port-20220531111947-2169" exists ...
	I0531 11:26:19.207754   14088 host.go:66] Checking if "default-k8s-different-port-20220531111947-2169" exists ...
	I0531 11:26:19.207765   14088 host.go:66] Checking if "default-k8s-different-port-20220531111947-2169" exists ...
	I0531 11:26:19.207998   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.209147   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.209200   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.209252   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.229535   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 11:26:19.251709   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.359083   14088 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:26:19.395633   14088 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 11:26:19.417089   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:26:19.417038   14088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 11:26:19.417220   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.454217   14088 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:26:19.475193   14088 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:26:19.456210   14088 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.475233   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0531 11:26:19.495917   14088 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:26:19.495977   14088 host.go:66] Checking if "default-k8s-different-port-20220531111947-2169" exists ...
	I0531 11:26:19.496013   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.533139   14088 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 11:26:19.497046   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.529167   14088 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531111947-2169" to be "Ready" ...
	I0531 11:26:19.554395   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 11:26:19.554412   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:26:19.554509   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.561412   14088 node_ready.go:49] node "default-k8s-different-port-20220531111947-2169" has status "Ready":"True"
	I0531 11:26:19.561427   14088 node_ready.go:38] duration metric: took 7.190013ms waiting for node "default-k8s-different-port-20220531111947-2169" to be "Ready" ...
	I0531 11:26:19.561435   14088 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:26:19.568009   14088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-2lzlj" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:19.577824   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:26:19.580095   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:26:19.625288   14088 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:26:19.625300   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:26:19.625376   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.641946   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:26:19.702693   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
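	Each `new ssh client` line above is a fresh SSH session into the node container via the Docker-published port 53881, authenticated with the machine's generated key. A minimal equivalent using golang.org/x/crypto/ssh (key path and port copied from the log; the command run is illustrative):

	    package main

	    import (
	        "fmt"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        key, err := os.ReadFile("/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa")
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test rig
	        }
	        client, err := ssh.Dial("tcp", "127.0.0.1:53881", cfg)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()
	        sess, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer sess.Close()
	        out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
	        fmt.Println(string(out), err)
	    }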
	I0531 11:26:19.741613   14088 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:26:19.741626   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:26:19.748847   14088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:26:19.763443   14088 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:26:19.763457   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:26:19.839993   14088 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:26:19.840010   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:26:19.841474   14088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:26:19.845785   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:26:19.845802   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:26:19.930906   14088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:26:19.940791   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:26:19.940810   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:26:20.030646   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:26:20.030658   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:26:20.058555   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:26:20.058577   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:26:20.160899   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:26:20.160914   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:26:20.333881   14088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.104326771s)
	I0531 11:26:20.333902   14088 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
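	Unescaped, the sed expression in the Completed line above inserts this hosts stanza into the CoreDNS Corefile just ahead of the forward directive, which is what makes host.minikube.internal resolve to the host gateway (192.168.65.2) from inside the cluster:

	        hosts {
	           192.168.65.2 host.minikube.internal
	           fallthrough
	        }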
	I0531 11:26:20.335732   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:26:20.335745   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:26:20.457100   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:26:20.457113   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:26:20.541664   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:26:20.541680   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:26:20.554506   14088 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:20.559258   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:26:20.559273   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:26:20.573672   14088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:26:21.082356   14088 pod_ready.go:97] error getting pod "coredns-64897985d-2lzlj" in "kube-system" namespace (skipping!): pods "coredns-64897985d-2lzlj" not found
	I0531 11:26:21.082373   14088 pod_ready.go:81] duration metric: took 1.514366453s waiting for pod "coredns-64897985d-2lzlj" in "kube-system" namespace to be "Ready" ...
	E0531 11:26:21.082383   14088 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-2lzlj" in "kube-system" namespace (skipping!): pods "coredns-64897985d-2lzlj" not found
	I0531 11:26:21.082394   14088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-8gl2g" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:21.368665   14088 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 11:26:21.425779   14088 addons.go:417] enableAddons completed in 2.297424066s
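	Addon enablement, as traced above, is two mechanical steps per addon: scp each manifest from memory into /etc/kubernetes/addons/ on the node, then batch-apply them with the version-pinned kubectl. A condensed sketch of the apply step (file list abbreviated from the dashboard apply line; sudo's VAR=value prefix sets KUBECONFIG exactly as in the log):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        cmd := exec.Command("sudo",
	            "KUBECONFIG=/var/lib/minikube/kubeconfig",
	            "/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
	            "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
	            "-f", "/etc/kubernetes/addons/dashboard-svc.yaml")
	        out, err := cmd.CombinedOutput()
	        fmt.Println(string(out))
	        if err != nil {
	            fmt.Println("apply failed:", err)
	        }
	    }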
	I0531 11:26:23.094776   14088 pod_ready.go:102] pod "coredns-64897985d-8gl2g" in "kube-system" namespace has status "Ready":"False"
	I0531 11:26:24.594197   14088 pod_ready.go:92] pod "coredns-64897985d-8gl2g" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.594209   14088 pod_ready.go:81] duration metric: took 3.511847426s waiting for pod "coredns-64897985d-8gl2g" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.594215   14088 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.598662   14088 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.598670   14088 pod_ready.go:81] duration metric: took 4.450155ms waiting for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.598675   14088 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.603375   14088 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.603383   14088 pod_ready.go:81] duration metric: took 4.70304ms waiting for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.603389   14088 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.607464   14088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.607472   14088 pod_ready.go:81] duration metric: took 4.07821ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.607478   14088 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qcdzt" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.611462   14088 pod_ready.go:92] pod "kube-proxy-qcdzt" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.611471   14088 pod_ready.go:81] duration metric: took 3.988818ms waiting for pod "kube-proxy-qcdzt" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.611478   14088 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.991707   14088 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.991717   14088 pod_ready.go:81] duration metric: took 380.23889ms waiting for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.991723   14088 pod_ready.go:38] duration metric: took 5.430342115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:26:24.991737   14088 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:26:24.991787   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:26:25.002620   14088 api_server.go:71] duration metric: took 5.8743338s to wait for apiserver process to appear ...
	I0531 11:26:25.002631   14088 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:26:25.002637   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:26:25.007626   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 200:
	ok
	I0531 11:26:25.008889   14088 api_server.go:140] control plane version: v1.23.6
	I0531 11:26:25.008898   14088 api_server.go:130] duration metric: took 6.263543ms to wait for apiserver health ...
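	The healthz probe above is a plain HTTPS GET against the apiserver port that Docker publishes on 127.0.0.1; a 200 with body "ok", as logged, counts as healthy. A sketch of the same check (TLS verification is skipped here because the loopback endpoint presents the cluster's own certificate; this shortcut is for the test rig only):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	    )

	    func main() {
	        client := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test-only shortcut
	        }}
	        resp, err := client.Get("https://127.0.0.1:53880/healthz")
	        if err != nil {
	            panic(err)
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Println(resp.StatusCode, string(body)) // 200 ok in the run above
	    }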
	I0531 11:26:25.008903   14088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:26:25.193992   14088 system_pods.go:59] 8 kube-system pods found
	I0531 11:26:25.194006   14088 system_pods.go:61] "coredns-64897985d-8gl2g" [20224d90-4fbc-4797-a5d1-b74e0f14966c] Running
	I0531 11:26:25.194010   14088 system_pods.go:61] "etcd-default-k8s-different-port-20220531111947-2169" [ed7b69e4-94a4-414f-9106-d2dc765aa919] Running
	I0531 11:26:25.194013   14088 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531111947-2169" [51fca6d8-ba10-47c5-bc13-a63b7f45905d] Running
	I0531 11:26:25.194017   14088 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531111947-2169" [a0094187-8da5-4b65-be3a-5db231aca832] Running
	I0531 11:26:25.194026   14088 system_pods.go:61] "kube-proxy-qcdzt" [650d3c7e-b8a2-4b30-a0fd-9304c714dbeb] Running
	I0531 11:26:25.194030   14088 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531111947-2169" [5b644c12-9c33-4dc8-8cf4-677604c45171] Running
	I0531 11:26:25.194038   14088 system_pods.go:61] "metrics-server-b955d9d8-6g9pv" [2396aa61-2370-4463-a547-ab35598222fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:26:25.194043   14088 system_pods.go:61] "storage-provisioner" [5834b6ee-483d-4dee-b45e-e4b5ee0d7da2] Running
	I0531 11:26:25.194048   14088 system_pods.go:74] duration metric: took 185.143863ms to wait for pod list to return data ...
	I0531 11:26:25.194053   14088 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:26:25.391790   14088 default_sa.go:45] found service account: "default"
	I0531 11:26:25.391800   14088 default_sa.go:55] duration metric: took 197.745815ms for default service account to be created ...
	I0531 11:26:25.391805   14088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 11:26:25.594937   14088 system_pods.go:86] 8 kube-system pods found
	I0531 11:26:25.594950   14088 system_pods.go:89] "coredns-64897985d-8gl2g" [20224d90-4fbc-4797-a5d1-b74e0f14966c] Running
	I0531 11:26:25.594954   14088 system_pods.go:89] "etcd-default-k8s-different-port-20220531111947-2169" [ed7b69e4-94a4-414f-9106-d2dc765aa919] Running
	I0531 11:26:25.594958   14088 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220531111947-2169" [51fca6d8-ba10-47c5-bc13-a63b7f45905d] Running
	I0531 11:26:25.594961   14088 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220531111947-2169" [a0094187-8da5-4b65-be3a-5db231aca832] Running
	I0531 11:26:25.594965   14088 system_pods.go:89] "kube-proxy-qcdzt" [650d3c7e-b8a2-4b30-a0fd-9304c714dbeb] Running
	I0531 11:26:25.594970   14088 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220531111947-2169" [5b644c12-9c33-4dc8-8cf4-677604c45171] Running
	I0531 11:26:25.594977   14088 system_pods.go:89] "metrics-server-b955d9d8-6g9pv" [2396aa61-2370-4463-a547-ab35598222fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:26:25.594984   14088 system_pods.go:89] "storage-provisioner" [5834b6ee-483d-4dee-b45e-e4b5ee0d7da2] Running
	I0531 11:26:25.594988   14088 system_pods.go:126] duration metric: took 203.182672ms to wait for k8s-apps to be running ...
	I0531 11:26:25.594996   14088 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 11:26:25.595045   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:26:25.606138   14088 system_svc.go:56] duration metric: took 11.141768ms WaitForService to wait for kubelet.
	I0531 11:26:25.606150   14088 kubeadm.go:572] duration metric: took 6.477875035s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 11:26:25.606163   14088 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:26:25.792428   14088 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:26:25.792439   14088 node_conditions.go:123] node cpu capacity is 6
	I0531 11:26:25.792446   14088 node_conditions.go:105] duration metric: took 186.282277ms to run NodePressure ...
	I0531 11:26:25.792453   14088 start.go:213] waiting for startup goroutines ...
	I0531 11:26:25.822420   14088 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:26:25.845035   14088 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220531111947-2169" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:20:54 UTC, end at Tue 2022-05-31 18:27:18 UTC. --
	May 31 18:25:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:35.256360647Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ab281278fe1b2c2c242b2a145587f3d1de6cd19210b62b65492ee64c5bfcccd6
	May 31 18:25:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:35.280349671Z" level=info msg="ignoring event" container=ab281278fe1b2c2c242b2a145587f3d1de6cd19210b62b65492ee64c5bfcccd6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:35.379366453Z" level=info msg="ignoring event" container=dafe4028ada295af5620f6ddadef116fe7f6fd9c305663c11816c32959498aea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:45 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:45.528021726Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1de7eea406fddaf55a444a75d5357bf9fdf9f6870bdec050c1f85b08f36d86f7
	May 31 18:25:45 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:45.585206292Z" level=info msg="ignoring event" container=1de7eea406fddaf55a444a75d5357bf9fdf9f6870bdec050c1f85b08f36d86f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:45 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:45.711307160Z" level=info msg="ignoring event" container=a2cffa9cf2011d5ba5dd0d275daec03381934c2a84c784dbbe14f6d5371097c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:45 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:45.818346277Z" level=info msg="ignoring event" container=17c3636e45b576b2ca4e2378428fc53134e263c827fe73cc27f1d89fee2f0817 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:55 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:55.882878071Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d67540e41143139239ddc2c9e0a22b4b6bc5500be1f2fd2c436c849197bd510b
	May 31 18:25:55 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:55.912677056Z" level=info msg="ignoring event" container=d67540e41143139239ddc2c9e0a22b4b6bc5500be1f2fd2c436c849197bd510b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:56 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:56.013632059Z" level=info msg="ignoring event" container=18a92c133007eb7e611d6f4ee7f9aecdbd18c46ad46953017d2d143f590cea5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:56 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:56.113429949Z" level=info msg="ignoring event" container=a6646bc1f03a02fddb3b6fb2959de34e611a13f57cb4348199ab7dfdc363e2cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:56 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:56.216261311Z" level=info msg="ignoring event" container=4f6f18f37b905d3618a1aa93efa4cf6d2b69a1b454cc6b493de02a8d4d6a8ffe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:56 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:56.331327946Z" level=info msg="ignoring event" container=3de37b7ea1031712b0f45a1bb06448ed3539e74be07e51b6f32411129aff4c1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:26:19 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:19.112353285Z" level=info msg="ignoring event" container=046c5962c2b793d74d48a9e8fcb22e0d0f2513a3e62de6bf473768b171dcadb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:26:21 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:21.688926815Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:21 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:21.688968585Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:21 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:21.690248772Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:22 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:22.655387612Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 18:26:29 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:29.618516243Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:26:29 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:29.873481009Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:26:33 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:33.000594839Z" level=info msg="ignoring event" container=ae3fab8b2a076e2c8319024b834cf59c943cbea276b83e03e3918640790dd843 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:26:33 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:33.885082397Z" level=info msg="ignoring event" container=63d6a777ef6a9c8f8acf4bf28a41b826246dfae33f578abe10731adbf49ab64b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:26:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:35.977599732Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:35.977664324Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:35.978837802Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
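	The fake.domain pull failures in this Docker log are expected: the metrics-server addon test deliberately rewrites the image to fake.domain/k8s.gcr.io/echoserver:1.4 (see the "Using image" line earlier in the run), so the daemon's registry lookup can never resolve and the pod stays Pending. The failure mode is reproducible with a one-line resolver check:

	    package main

	    import (
	        "fmt"
	        "net"
	    )

	    func main() {
	        // Mirrors dockerd's failing lookup: fake.domain has no DNS record.
	        addrs, err := net.LookupHost("fake.domain")
	        fmt.Println(addrs, err) // expect a "no such host" error
	    }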
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	63d6a777ef6a9       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   99d535726d0c2
	ed4ea52ccc3d9       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   49 seconds ago       Running             kubernetes-dashboard        0                   d72fd86e980ff
	03c140064443b       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   f6d68ea687c91
	9ccdebca4d293       a4ca41631cc7a                                                                                    58 seconds ago       Running             coredns                     0                   5d5a1cdc62d71
	db58c332639e7       4c03754524064                                                                                    59 seconds ago       Running             kube-proxy                  0                   39bfff4748953
	5b367aa74fd80       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   77657b78cb5d8
	aa61eab3331e7       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   68765bbba059b
	98f1f9d8f4dcb       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   41f2fe8acf6bb
	3403f45d4dee4       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   a4b1333859beb
	
	* 
	* ==> coredns [9ccdebca4d29] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220531111947-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220531111947-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=default-k8s-different-port-20220531111947-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T11_26_05_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:26:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220531111947-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:27:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:27:16 +0000   Tue, 31 May 2022 18:26:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:27:16 +0000   Tue, 31 May 2022 18:26:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:27:16 +0000   Tue, 31 May 2022 18:26:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:27:16 +0000   Tue, 31 May 2022 18:27:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    default-k8s-different-port-20220531111947-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                77303ae4-ed71-42ab-ab3f-d34a69c51506
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-8gl2g                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     61s
	  kube-system                 etcd-default-k8s-different-port-20220531111947-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220531111947-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220531111947-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-qcdzt                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220531111947-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 metrics-server-b955d9d8-6g9pv                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         59s
	  kube-system                 storage-provisioner                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-58jxt                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-8kcn7                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 60s   kube-proxy  
	  Normal  NodeHasSufficientMemory  74s   kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasSufficientPID
	  Normal  Starting                 74s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  73s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                63s   kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeReady
	  Normal  Starting                 3s    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [3403f45d4dee] <==
	* {"level":"info","ts":"2022-05-31T18:26:00.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T18:26:00.179Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:26:00.179Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:26:00.180Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:26:00.180Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:26:00.180Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:26:00.180Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:default-k8s-different-port-20220531111947-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  18:27:19 up  1:15,  0 users,  load average: 0.41, 0.69, 0.95
	Linux default-k8s-different-port-20220531111947-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [aa61eab3331e] <==
	* I0531 18:26:03.835155       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 18:26:03.842477       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 18:26:03.845342       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 18:26:03.845371       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 18:26:04.116795       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:26:04.140978       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:26:04.180533       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 18:26:04.186064       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 18:26:04.187313       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:26:04.190167       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:26:04.976481       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:26:05.638503       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:26:05.652880       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:26:05.661903       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:26:05.853596       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:26:18.008930       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:26:18.511725       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:26:19.236145       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:26:20.548728       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.111.32.176]
	I0531 18:26:21.269232       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.29.74]
	I0531 18:26:21.338790       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.107.187]
	W0531 18:26:21.437880       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:26:21.437933       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:26:21.437941       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [98f1f9d8f4dc] <==
	* I0531 18:26:18.712706       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8gl2g"
	I0531 18:26:18.728530       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-2lzlj"
	I0531 18:26:20.432631       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0531 18:26:20.438399       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0531 18:26:20.441449       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0531 18:26:20.447784       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-6g9pv"
	W0531 18:26:20.823384       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	I0531 18:26:21.136868       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0531 18:26:21.142833       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:26:21.148843       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0531 18:26:21.148926       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:26:21.152210       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:26:21.152291       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:26:21.154252       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:26:21.158535       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:26:21.160315       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:26:21.160371       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:26:21.164648       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:26:21.164843       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:26:21.167707       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:26:21.167754       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:26:21.179306       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-8kcn7"
	I0531 18:26:21.236561       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-58jxt"
	E0531 18:27:16.143138       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:27:16.216918       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [db58c332639e] <==
	* I0531 18:26:19.156404       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:26:19.156443       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:26:19.156484       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:26:19.229993       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:26:19.230014       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:26:19.230019       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:26:19.230031       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:26:19.230378       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:26:19.231142       1 config.go:317] "Starting service config controller"
	I0531 18:26:19.231159       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:26:19.231179       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:26:19.231183       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:26:19.332047       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:26:19.332103       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5b367aa74fd8] <==
	* E0531 18:26:02.874368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:26:02.873875       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:26:02.874376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:26:02.874363       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:26:02.874391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:26:02.873916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:26:02.874489       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:26:02.874556       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:26:02.874696       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:26:02.874770       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:26:03.727050       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:26:03.727102       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:26:03.826001       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:26:03.826195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:26:03.828079       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:26:03.828158       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:26:03.831615       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:26:03.831670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:26:03.846710       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:26:03.846746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:26:03.917086       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:26:03.917147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:26:03.937446       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:26:03.937540       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 18:26:05.971865       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:20:54 UTC, end at Tue 2022-05-31 18:27:20 UTC. --
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.691777    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49vwv\" (UniqueName: \"kubernetes.io/projected/5834b6ee-483d-4dee-b45e-e4b5ee0d7da2-kube-api-access-49vwv\") pod \"storage-provisioner\" (UID: \"5834b6ee-483d-4dee-b45e-e4b5ee0d7da2\") " pod="kube-system/storage-provisioner"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.691841    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5834b6ee-483d-4dee-b45e-e4b5ee0d7da2-tmp\") pod \"storage-provisioner\" (UID: \"5834b6ee-483d-4dee-b45e-e4b5ee0d7da2\") " pod="kube-system/storage-provisioner"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.691899    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/650d3c7e-b8a2-4b30-a0fd-9304c714dbeb-kube-proxy\") pod \"kube-proxy-qcdzt\" (UID: \"650d3c7e-b8a2-4b30-a0fd-9304c714dbeb\") " pod="kube-system/kube-proxy-qcdzt"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.691996    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20224d90-4fbc-4797-a5d1-b74e0f14966c-config-volume\") pod \"coredns-64897985d-8gl2g\" (UID: \"20224d90-4fbc-4797-a5d1-b74e0f14966c\") " pod="kube-system/coredns-64897985d-8gl2g"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692044    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2396aa61-2370-4463-a547-ab35598222fd-tmp-dir\") pod \"metrics-server-b955d9d8-6g9pv\" (UID: \"2396aa61-2370-4463-a547-ab35598222fd\") " pod="kube-system/metrics-server-b955d9d8-6g9pv"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692124    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8bv\" (UniqueName: \"kubernetes.io/projected/b4411164-2a83-42f6-97cc-ea5daad54620-kube-api-access-6d8bv\") pod \"dashboard-metrics-scraper-56974995fc-58jxt\" (UID: \"b4411164-2a83-42f6-97cc-ea5daad54620\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-58jxt"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692144    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l64b\" (UniqueName: \"kubernetes.io/projected/20224d90-4fbc-4797-a5d1-b74e0f14966c-kube-api-access-6l64b\") pod \"coredns-64897985d-8gl2g\" (UID: \"20224d90-4fbc-4797-a5d1-b74e0f14966c\") " pod="kube-system/coredns-64897985d-8gl2g"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692159    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/650d3c7e-b8a2-4b30-a0fd-9304c714dbeb-xtables-lock\") pod \"kube-proxy-qcdzt\" (UID: \"650d3c7e-b8a2-4b30-a0fd-9304c714dbeb\") " pod="kube-system/kube-proxy-qcdzt"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692175    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/650d3c7e-b8a2-4b30-a0fd-9304c714dbeb-lib-modules\") pod \"kube-proxy-qcdzt\" (UID: \"650d3c7e-b8a2-4b30-a0fd-9304c714dbeb\") " pod="kube-system/kube-proxy-qcdzt"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692189    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8hv\" (UniqueName: \"kubernetes.io/projected/650d3c7e-b8a2-4b30-a0fd-9304c714dbeb-kube-api-access-rl8hv\") pod \"kube-proxy-qcdzt\" (UID: \"650d3c7e-b8a2-4b30-a0fd-9304c714dbeb\") " pod="kube-system/kube-proxy-qcdzt"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692204    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b45cc5ed-1c03-4907-8209-1b9fa4dc5f17-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-8kcn7\" (UID: \"b45cc5ed-1c03-4907-8209-1b9fa4dc5f17\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8kcn7"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692220    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdg54\" (UniqueName: \"kubernetes.io/projected/b45cc5ed-1c03-4907-8209-1b9fa4dc5f17-kube-api-access-sdg54\") pod \"kubernetes-dashboard-8469778f77-8kcn7\" (UID: \"b45cc5ed-1c03-4907-8209-1b9fa4dc5f17\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8kcn7"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692235    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtszs\" (UniqueName: \"kubernetes.io/projected/2396aa61-2370-4463-a547-ab35598222fd-kube-api-access-rtszs\") pod \"metrics-server-b955d9d8-6g9pv\" (UID: \"2396aa61-2370-4463-a547-ab35598222fd\") " pod="kube-system/metrics-server-b955d9d8-6g9pv"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692255    7012 reconciler.go:157] "Reconciler: start to sync state"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.106352    7012 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220531111947-2169\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220531111947-2169"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.264012    7012 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220531111947-2169\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220531111947-2169"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.464121    7012 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220531111947-2169\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220531111947-2169"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:18.659736    7012 request.go:665] Waited for 1.03703191s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.663936    7012 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220531111947-2169\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220531111947-2169"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.794361    7012 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.794403    7012 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.794440    7012 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/20224d90-4fbc-4797-a5d1-b74e0f14966c-config-volume podName:20224d90-4fbc-4797-a5d1-b74e0f14966c nodeName:}" failed. No retries permitted until 2022-05-31 18:27:19.294416721 +0000 UTC m=+2.991220061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20224d90-4fbc-4797-a5d1-b74e0f14966c-config-volume") pod "coredns-64897985d-8gl2g" (UID: "20224d90-4fbc-4797-a5d1-b74e0f14966c") : failed to sync configmap cache: timed out waiting for the condition
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.794454    7012 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/650d3c7e-b8a2-4b30-a0fd-9304c714dbeb-kube-proxy podName:650d3c7e-b8a2-4b30-a0fd-9304c714dbeb nodeName:}" failed. No retries permitted until 2022-05-31 18:27:19.294447271 +0000 UTC m=+2.991250606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/650d3c7e-b8a2-4b30-a0fd-9304c714dbeb-kube-proxy") pod "kube-proxy-qcdzt" (UID: "650d3c7e-b8a2-4b30-a0fd-9304c714dbeb") : failed to sync configmap cache: timed out waiting for the condition
	May 31 18:27:19 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:19.772012    7012 scope.go:110] "RemoveContainer" containerID="63d6a777ef6a9c8f8acf4bf28a41b826246dfae33f578abe10731adbf49ab64b"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: W0531 18:27:20.003251    7012 container.go:489] Failed to get RecentStats("/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4411164_2a83_42f6_97cc_ea5daad54620.slice/docker-2de1e18d40ead2d8eca3ab265b48b371512bd10fdb702de89561d9b9d325e9e4.scope") while determining the next housekeeping: unable to find data in memory cache
	
	* 
	* ==> kubernetes-dashboard [ed4ea52ccc3d] <==
	* 2022/05/31 18:26:29 Using namespace: kubernetes-dashboard
	2022/05/31 18:26:29 Using in-cluster config to connect to apiserver
	2022/05/31 18:26:29 Using secret token for csrf signing
	2022/05/31 18:26:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 18:26:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 18:26:29 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 18:26:29 Generating JWE encryption key
	2022/05/31 18:26:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 18:26:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 18:26:29 Initializing JWE encryption key from synchronized object
	2022/05/31 18:26:29 Creating in-cluster Sidecar client
	2022/05/31 18:26:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 18:26:29 Serving insecurely on HTTP port: 9090
	2022/05/31 18:26:29 Starting overwatch
	2022/05/31 18:27:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [03c140064443] <==
	* I0531 18:26:21.398008       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:26:21.405747       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:26:21.405826       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:26:21.438225       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:26:21.438303       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"123381d7-0af9-4b94-9365-c6c34f06ee85", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220531111947-2169_fdc7e13b-190a-406d-80dc-158d6eec536e became leader
	I0531 18:26:21.438566       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220531111947-2169_fdc7e13b-190a-406d-80dc-158d6eec536e!
	I0531 18:26:21.538719       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220531111947-2169_fdc7e13b-190a-406d-80dc-158d6eec536e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220531111947-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-6g9pv
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220531111947-2169 describe pod metrics-server-b955d9d8-6g9pv
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531111947-2169 describe pod metrics-server-b955d9d8-6g9pv: exit status 1 (268.080347ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-6g9pv" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220531111947-2169 describe pod metrics-server-b955d9d8-6g9pv: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220531111947-2169
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220531111947-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901",
	        "Created": "2022-05-31T18:19:53.754328959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 254580,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:20:54.210493022Z",
	            "FinishedAt": "2022-05-31T18:20:52.286893922Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901/hostname",
	        "HostsPath": "/var/lib/docker/containers/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901/hosts",
	        "LogPath": "/var/lib/docker/containers/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901/2126d010e5964a6bfe2a07c4ed4946fc7b3d7bcd468f2bd77a9fc76d88ff7901-json.log",
	        "Name": "/default-k8s-different-port-20220531111947-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220531111947-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220531111947-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38143735d7f8c46ea5f88cd36796f56f1e3e375f3b2b9cb79c1cb4443f78bed7-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38143735d7f8c46ea5f88cd36796f56f1e3e375f3b2b9cb79c1cb4443f78bed7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38143735d7f8c46ea5f88cd36796f56f1e3e375f3b2b9cb79c1cb4443f78bed7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38143735d7f8c46ea5f88cd36796f56f1e3e375f3b2b9cb79c1cb4443f78bed7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220531111947-2169",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220531111947-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220531111947-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220531111947-2169",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220531111947-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625190a53b984200ac4c4136adfbae8f8188de966ffdbd8935d4eba14b515e91",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53877"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53878"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53879"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53880"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/625190a53b98",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220531111947-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2126d010e596",
	                        "default-k8s-different-port-20220531111947-2169"
	                    ],
	                    "NetworkID": "edbf55a2a15ca8d0c53f946fc87d4d604387c6b971b5f4b18e149d39e0a8f4e3",
	                    "EndpointID": "ef1a715e30ff0ba626cd15ad1356e4341a49632551d71ff19cbd6bf89d5dd6bc",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220531111947-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220531111947-2169 logs -n 25: (2.543395159s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | no-preload-20220531110349-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | no-preload-20220531110349-2169                    |                                                |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:12 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:12 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:13 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169               | old-k8s-version-20220531110241-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:16 PDT | 31 May 22 11:16 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:13 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:18 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:18 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531111208-2169                   | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531111208-2169                   | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220531111946-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | disable-driver-mounts-20220531111946-2169         |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169               | old-k8s-version-20220531110241-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531111947-2169    | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:20:52
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:20:52.944881   14088 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:20:52.945084   14088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:20:52.945089   14088 out.go:309] Setting ErrFile to fd 2...
	I0531 11:20:52.945093   14088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:20:52.945194   14088 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:20:52.945466   14088 out.go:303] Setting JSON to false
	I0531 11:20:52.960339   14088 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4821,"bootTime":1654016431,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:20:52.960440   14088 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:20:52.982638   14088 out.go:177] * [default-k8s-different-port-20220531111947-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:20:53.025482   14088 notify.go:193] Checking for updates...
	I0531 11:20:53.047412   14088 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:20:53.069297   14088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:20:53.090403   14088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:20:53.112640   14088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:20:53.134605   14088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:20:53.156922   14088 config.go:178] Loaded profile config "default-k8s-different-port-20220531111947-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:20:53.157647   14088 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:20:53.231919   14088 docker.go:137] docker version: linux-20.10.14
	I0531 11:20:53.232051   14088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:20:53.359110   14088 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:20:53.293756437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:20:53.402586   14088 out.go:177] * Using the docker driver based on existing profile
	I0531 11:20:53.424356   14088 start.go:284] selected driver: docker
	I0531 11:20:53.424384   14088 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:20:53.424528   14088 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:20:53.427949   14088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:20:53.551765   14088 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:20:53.48889853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:20:53.551941   14088 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 11:20:53.551960   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:20:53.551966   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:20:53.551973   14088 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:20:53.574240   14088 out.go:177] * Starting control plane node default-k8s-different-port-20220531111947-2169 in cluster default-k8s-different-port-20220531111947-2169
	I0531 11:20:53.595811   14088 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:20:53.617672   14088 out.go:177] * Pulling base image ...
	I0531 11:20:53.660942   14088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:20:53.661017   14088 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:20:53.661021   14088 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:20:53.661045   14088 cache.go:57] Caching tarball of preloaded images
	I0531 11:20:53.661255   14088 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:20:53.661288   14088 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 11:20:53.662334   14088 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/config.json ...
	I0531 11:20:53.728191   14088 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:20:53.728208   14088 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:20:53.728219   14088 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:20:53.728284   14088 start.go:352] acquiring machines lock for default-k8s-different-port-20220531111947-2169: {Name:mk78e9fe98c6a3e232878ce765bd193e5b506828 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:20:53.728368   14088 start.go:356] acquired machines lock for "default-k8s-different-port-20220531111947-2169" in 55.664µs
	I0531 11:20:53.728390   14088 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:20:53.728397   14088 fix.go:55] fixHost starting: 
	I0531 11:20:53.728613   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:20:53.795533   14088 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220531111947-2169: state=Stopped err=<nil>
	W0531 11:20:53.795566   14088 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:20:53.839440   14088 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220531111947-2169" ...
	I0531 11:20:53.861504   14088 cli_runner.go:164] Run: docker start default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.214277   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:20:54.285676   14088 kic.go:416] container "default-k8s-different-port-20220531111947-2169" state is running.
	I0531 11:20:54.286268   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.359103   14088 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/config.json ...
	I0531 11:20:54.359483   14088 machine.go:88] provisioning docker machine ...
	I0531 11:20:54.359511   14088 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220531111947-2169"
	I0531 11:20:54.359571   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.431991   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:54.432193   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:54.432206   14088 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220531111947-2169 && echo "default-k8s-different-port-20220531111947-2169" | sudo tee /etc/hostname
	I0531 11:20:54.553685   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220531111947-2169
	
	I0531 11:20:54.553769   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.625847   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:54.625998   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:54.626013   14088 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220531111947-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220531111947-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220531111947-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:20:54.740939   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:20:54.740960   14088 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:20:54.740983   14088 ubuntu.go:177] setting up certificates
	I0531 11:20:54.740993   14088 provision.go:83] configureAuth start
	I0531 11:20:54.741060   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:54.813502   14088 provision.go:138] copyHostCerts
	I0531 11:20:54.813586   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:20:54.813597   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:20:54.813681   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:20:54.813909   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:20:54.813929   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:20:54.813988   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:20:54.814120   14088 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:20:54.814127   14088 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:20:54.814187   14088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:20:54.814303   14088 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220531111947-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220531111947-2169]
	I0531 11:20:54.984093   14088 provision.go:172] copyRemoteCerts
	I0531 11:20:54.984161   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:20:54.984204   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.054898   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:55.140792   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:20:55.157975   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0531 11:20:55.174955   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:20:55.192282   14088 provision.go:86] duration metric: configureAuth took 451.28007ms
	I0531 11:20:55.192295   14088 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:20:55.192463   14088 config.go:178] Loaded profile config "default-k8s-different-port-20220531111947-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:20:55.192523   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.261854   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.262008   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.262018   14088 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:20:55.374013   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:20:55.374024   14088 ubuntu.go:71] root file system type: overlay
	I0531 11:20:55.374182   14088 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:20:55.374259   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.444497   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.444646   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.444717   14088 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:20:55.566811   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:20:55.566903   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.637162   14088 main.go:134] libmachine: Using SSH client type: native
	I0531 11:20:55.637315   14088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53881 <nil> <nil>}
	I0531 11:20:55.637331   14088 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:20:55.756943   14088 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:20:55.756956   14088 machine.go:91] provisioned docker machine in 1.397481881s
	I0531 11:20:55.756966   14088 start.go:306] post-start starting for "default-k8s-different-port-20220531111947-2169" (driver="docker")
	I0531 11:20:55.756972   14088 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:20:55.757026   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:20:55.757069   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:55.826937   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:55.911267   14088 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:20:55.914903   14088 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:20:55.914917   14088 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:20:55.914925   14088 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:20:55.914929   14088 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:20:55.914937   14088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:20:55.915031   14088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:20:55.915160   14088 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:20:55.915312   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:20:55.922282   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:20:55.940097   14088 start.go:309] post-start completed in 183.123925ms
	I0531 11:20:55.940169   14088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:20:55.940229   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.011052   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.090867   14088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:20:56.095336   14088 fix.go:57] fixHost completed within 2.36696395s
	I0531 11:20:56.095355   14088 start.go:81] releasing machines lock for "default-k8s-different-port-20220531111947-2169", held for 2.367007248s
	I0531 11:20:56.095435   14088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.165619   14088 ssh_runner.go:195] Run: systemctl --version
	I0531 11:20:56.165622   14088 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:20:56.165682   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.165696   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:56.241266   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.242997   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:20:56.324128   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:20:56.452142   14088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:20:56.462221   14088 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:20:56.462271   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:20:56.472988   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:20:56.486894   14088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:20:56.552509   14088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:20:56.624892   14088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:20:56.634564   14088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:20:56.697617   14088 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:20:56.707335   14088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:20:56.742123   14088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:20:56.823696   14088 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:20:56.823896   14088 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220531111947-2169 dig +short host.docker.internal
	I0531 11:20:56.945635   14088 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:20:56.945757   14088 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:20:56.950017   14088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:20:56.959956   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:57.030149   14088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:20:57.030228   14088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:20:57.061515   14088 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:20:57.061532   14088 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:20:57.061601   14088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:20:57.092363   14088 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 11:20:57.092379   14088 cache_images.go:84] Images are preloaded, skipping loading
	I0531 11:20:57.092456   14088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:20:57.165549   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:20:57.165561   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:20:57.165578   14088 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 11:20:57.165607   14088 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220531111947-2169 NodeName:default-k8s-different-port-20220531111947-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:20:57.165740   14088 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220531111947-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
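
One detail in the generated config worth calling out: cgroupDriver: systemd in the KubeletConfiguration has to agree with the driver Docker reports, which is what the docker info probe earlier in this startup established; a mismatch keeps the kubelet from starting. The same check, runnable on the node:

    docker info --format '{{.CgroupDriver}}'    # expected: systemd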
	
	I0531 11:20:57.165815   14088 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220531111947-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
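
In the kubelet drop-in above, the bare ExecStart= line is deliberate: for list-valued systemd settings, a drop-in must first clear the base unit's value before assigning a replacement. To inspect the merged unit on the node once the drop-in lands as 10-kubeadm.conf (see the scp just below):

    sudo systemctl cat kubelet    # base unit plus the 10-kubeadm.conf override
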
	I0531 11:20:57.165869   14088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:20:57.174125   14088 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:20:57.174180   14088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:20:57.181428   14088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0531 11:20:57.193769   14088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:20:57.206426   14088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0531 11:20:57.218608   14088 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:20:57.222399   14088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:20:57.231856   14088 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169 for IP: 192.168.58.2
	I0531 11:20:57.231960   14088 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:20:57.232024   14088 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:20:57.232114   14088 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/client.key
	I0531 11:20:57.232170   14088 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.key.cee25041
	I0531 11:20:57.232221   14088 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.key
	I0531 11:20:57.232425   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:20:57.232955   14088 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:20:57.232980   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:20:57.233064   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:20:57.233187   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:20:57.233279   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:20:57.233411   14088 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:20:57.234195   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:20:57.251232   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 11:20:57.268133   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:20:57.284962   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531111947-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 11:20:57.302399   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:20:57.319341   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:20:57.336647   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:20:57.353997   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:20:57.370819   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:20:57.388311   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:20:57.405153   14088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:20:57.422579   14088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:20:57.435885   14088 ssh_runner.go:195] Run: openssl version
	I0531 11:20:57.441459   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:20:57.449337   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.453299   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.453340   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:20:57.458858   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:20:57.467841   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:20:57.476418   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.480355   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.480411   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:20:57.485744   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 11:20:57.493027   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:20:57.500863   14088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.504963   14088 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.505012   14088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:20:57.510223   14088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
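
The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CA certificates, which is why each PEM is then symlinked as /etc/ssl/certs/<hash>.0. Reproducing the last one:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
    # prints 51391683, matching the /etc/ssl/certs/51391683.0 symlink created above
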
	I0531 11:20:57.517409   14088 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220531111947-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531111947-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:20:57.517502   14088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
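
The name=k8s_.*_(kube-system)_ filter above works because dockershim names its containers k8s_<container>_<pod>_<namespace>_<pod-uid>_<attempt>, so the Kubernetes namespace is recoverable from the Docker-side name alone. To see the convention directly, assuming a shell on the node:

    docker ps --filter name=k8s_ --format '{{.Names}}'
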
	I0531 11:20:57.546434   14088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:20:57.554470   14088 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:20:57.554485   14088 kubeadm.go:626] restartCluster start
	I0531 11:20:57.554529   14088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:20:57.561236   14088 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:57.561291   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:20:57.632105   14088 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220531111947-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:20:57.632282   14088 kubeconfig.go:127] "default-k8s-different-port-20220531111947-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:20:57.632611   14088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:20:57.633766   14088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:20:57.641261   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:57.641314   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:57.649587   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:57.851170   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:57.851374   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:57.861667   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.049817   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.049887   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.059064   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.250141   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.250267   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.260725   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.449710   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.449793   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.459510   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.651089   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.651214   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.661996   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:58.851729   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:58.851819   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:58.861332   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.051735   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.051889   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.063335   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.250489   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.250612   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.261366   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.451746   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.451884   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.461795   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.651683   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.651840   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.662763   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:20:59.851738   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:20:59.851863   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:20:59.862352   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.051822   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.051919   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.060856   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.251144   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.251295   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.262356   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.450234   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.450389   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.460742   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.650557   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.650686   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.661258   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.661270   14088 api_server.go:165] Checking apiserver status ...
	I0531 11:21:00.661321   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:21:00.670223   14088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.670238   14088 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 11:21:00.670248   14088 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:21:00.670310   14088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:21:00.699869   14088 docker.go:442] Stopping containers: [b48c62911956 39ecd49e2959 0d1c428e0118 1d43fd380df3 5f61410a1644 fc5d85a557ec 018e14d1f471 e572fe01902d bab412bceb10 e5581b46b9e9 93aa5f139910 96cf36883161 2671bf2afe6f a57dbeccaab4 ed5be2dbd485 b2ae6df97b5f]
	I0531 11:21:00.699944   14088 ssh_runner.go:195] Run: docker stop b48c62911956 39ecd49e2959 0d1c428e0118 1d43fd380df3 5f61410a1644 fc5d85a557ec 018e14d1f471 e572fe01902d bab412bceb10 e5581b46b9e9 93aa5f139910 96cf36883161 2671bf2afe6f a57dbeccaab4 ed5be2dbd485 b2ae6df97b5f
	I0531 11:21:00.731543   14088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:21:00.744008   14088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:21:00.751326   14088 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 18:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 May 31 18:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 18:20 /etc/kubernetes/scheduler.conf
	
	I0531 11:21:00.751370   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0531 11:21:00.758546   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0531 11:21:00.765681   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0531 11:21:00.772792   14088 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.772846   14088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:21:00.779694   14088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0531 11:21:00.786641   14088 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:21:00.786689   14088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
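
The grep-and-remove pass above checks each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8444 endpoint and deletes the ones that do not match, so the kubeadm init phase kubeconfig run below regenerates only those files. A sketch of the same check done by hand:

    sudo sh -c "grep -l 'https://control-plane.minikube.internal:8444' /etc/kubernetes/*.conf"
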
	I0531 11:21:00.793632   14088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:21:00.800995   14088 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:21:00.801006   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:00.845457   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.402938   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.519525   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.564444   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:01.614093   14088 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:21:01.614155   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.126101   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.624157   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:21:02.670383   14088 api_server.go:71] duration metric: took 1.056305017s to wait for apiserver process to appear ...
	I0531 11:21:02.670406   14088 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:21:02.670419   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:02.671578   14088 api_server.go:256] stopped: https://127.0.0.1:53880/healthz: Get "https://127.0.0.1:53880/healthz": EOF
	I0531 11:21:03.172114   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:05.213534   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:21:05.213551   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
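
The 403 above is expected this early in a restart: the probe is unauthenticated (system:anonymous), and anonymous access to /healthz is only granted once the RBAC bootstrap roles exist; the 500 responses that follow still show poststarthook/rbac/bootstrap-roles failing for exactly that reason. An authenticated probe — a sketch, assuming root access to admin.conf on the node — would look like:

    sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw /healthz
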
	I0531 11:21:05.671899   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:05.679000   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:21:05.679016   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:21:06.171614   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:06.177304   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:21:06.177318   14088 api_server.go:102] status: https://127.0.0.1:53880/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:21:06.671709   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:21:06.677584   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 200:
	ok
	I0531 11:21:06.684407   14088 api_server.go:140] control plane version: v1.23.6
	I0531 11:21:06.684421   14088 api_server.go:130] duration metric: took 4.014058772s to wait for apiserver health ...
	I0531 11:21:06.684426   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:21:06.684430   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:21:06.684440   14088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:21:06.692117   14088 system_pods.go:59] 8 kube-system pods found
	I0531 11:21:06.692131   14088 system_pods.go:61] "coredns-64897985d-hw9jj" [a99971df-076d-4aba-a217-a2a75c87a745] Running
	I0531 11:21:06.692141   14088 system_pods.go:61] "etcd-default-k8s-different-port-20220531111947-2169" [297b8c39-20c3-4101-878e-1fab3854f875] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 11:21:06.692146   14088 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531111947-2169" [d3af2377-33bb-4d77-873c-bf4d620b1ccc] Running
	I0531 11:21:06.692152   14088 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531111947-2169" [3f64c0cd-80e0-4f01-b61c-62d6914342cc] Running
	I0531 11:21:06.692156   14088 system_pods.go:61] "kube-proxy-4ljp8" [b5ef4698-6857-48cc-828a-26043bc6f05f] Running
	I0531 11:21:06.692159   14088 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531111947-2169" [8efaf333-e7f0-4eb4-ace3-68210d3b9d66] Running
	I0531 11:21:06.692166   14088 system_pods.go:61] "metrics-server-b955d9d8-dj4pb" [837a7b7e-0528-4b97-af67-3dab5106f2a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:21:06.692172   14088 system_pods.go:61] "storage-provisioner" [45148f19-69b5-4e40-a3e5-284bafef13b2] Running
	I0531 11:21:06.692175   14088 system_pods.go:74] duration metric: took 7.732726ms to wait for pod list to return data ...
	I0531 11:21:06.692181   14088 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:21:06.695801   14088 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:21:06.695817   14088 node_conditions.go:123] node cpu capacity is 6
	I0531 11:21:06.695829   14088 node_conditions.go:105] duration metric: took 3.644789ms to run NodePressure ...
	I0531 11:21:06.695842   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:21:06.831589   14088 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 11:21:06.835556   14088 kubeadm.go:777] kubelet initialised
	I0531 11:21:06.835567   14088 kubeadm.go:778] duration metric: took 3.96448ms waiting for restarted kubelet to initialise ...
	I0531 11:21:06.835577   14088 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:21:06.841061   14088 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-hw9jj" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:06.857619   14088 pod_ready.go:92] pod "coredns-64897985d-hw9jj" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:06.857637   14088 pod_ready.go:81] duration metric: took 16.555578ms waiting for pod "coredns-64897985d-hw9jj" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:06.857647   14088 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:08.871386   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:11.369552   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:13.868531   14088 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:14.869966   14088 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:14.869979   14088 pod_ready.go:81] duration metric: took 8.012423735s waiting for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:14.869987   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:16.883059   14088 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:18.883566   14088 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.883577   14088 pod_ready.go:81] duration metric: took 4.013634265s waiting for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.883584   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.888077   14088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.888084   14088 pod_ready.go:81] duration metric: took 4.485217ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.888090   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4ljp8" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.892647   14088 pod_ready.go:92] pod "kube-proxy-4ljp8" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.892655   14088 pod_ready.go:81] duration metric: took 4.561071ms waiting for pod "kube-proxy-4ljp8" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.892661   14088 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.896699   14088 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:21:18.896707   14088 pod_ready.go:81] duration metric: took 4.041445ms waiting for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:18.896713   14088 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" ...
	I0531 11:21:20.908715   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:23.409540   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:25.909327   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:28.408295   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:30.409483   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:32.411745   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:34.911779   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:37.408629   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:39.412518   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:41.908120   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:43.908385   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:45.910144   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:48.411558   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:50.907773   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:52.911197   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:55.409130   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:21:57.909519   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:00.411176   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:02.907272   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:04.911276   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:07.408653   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:09.909908   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:12.408769   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:14.410410   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:16.910112   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:19.408777   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:21.410381   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:23.410768   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:25.908736   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:27.910873   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:30.410574   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:32.410928   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:34.907313   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:36.907978   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:39.410775   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:41.908034   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:43.909646   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:46.409183   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:48.909862   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:51.410737   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:53.908132   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:55.909762   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:22:57.909964   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:00.408361   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:02.907705   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:04.908465   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:07.407592   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:09.410553   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:11.907998   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:13.910191   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:16.407762   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:18.408926   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:20.907622   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:23.410482   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:25.907564   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:27.910471   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:30.409056   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:32.908170   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:34.908332   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:37.407176   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:39.409174   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:41.907391   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:44.408139   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:46.906854   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:48.908411   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:51.408059   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:53.907472   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:56.407152   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:23:58.908572   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:01.407858   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:03.409607   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:05.908557   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:08.408406   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:10.410209   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:12.909722   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:15.407046   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:17.908838   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:20.406716   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:22.407482   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:24.408743   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:26.907123   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:28.908785   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:31.406500   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:33.407133   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:35.407499   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:37.409435   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:39.907341   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:42.407598   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:44.408678   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:46.408812   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:48.906968   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:50.907917   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:52.909090   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:55.406872   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:57.407488   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:24:59.906302   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:01.907576   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:04.408849   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:06.907815   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:09.413253   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:11.908170   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:14.407071   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:16.906996   14088 pod_ready.go:102] pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace has status "Ready":"False"
	I0531 11:25:18.900324   14088 pod_ready.go:81] duration metric: took 4m0.006483722s waiting for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" ...
	E0531 11:25:18.900352   14088 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-dj4pb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 11:25:18.900380   14088 pod_ready.go:38] duration metric: took 4m12.067852815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:25:18.900441   14088 kubeadm.go:630] restartCluster took 4m21.349122361s
	W0531 11:25:18.900579   14088 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
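The wall of near-identical lines above is a single readiness poll: pod_ready.go re-checks the metrics-server pod's Ready condition roughly every 2.5 seconds until the 4m0s budget expires, then gives up without retrying and falls back to a full cluster reset. A minimal sketch of such a loop with client-go (not minikube's actual pod_ready.go; the function name and the 2.5s interval are assumptions read off the log cadence):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod's Ready condition until it turns True or the
    // timeout elapses, matching the "has status Ready:False" lines above.
    func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as "not ready yet" and keep polling
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady {
                    return cond.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil // no Ready condition reported yet
        })
    }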
	I0531 11:25:18.900609   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 11:25:57.271214   14088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.371057145s)
	I0531 11:25:57.271273   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:25:57.280781   14088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:25:57.288357   14088 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 11:25:57.288400   14088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:25:57.295934   14088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 11:25:57.295968   14088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 11:25:57.778963   14088 out.go:204]   - Generating certificates and keys ...
	I0531 11:25:58.867689   14088 out.go:204]   - Booting up control plane ...
	I0531 11:26:05.418464   14088 out.go:204]   - Configuring RBAC rules ...
	I0531 11:26:05.792173   14088 cni.go:95] Creating CNI manager for ""
	I0531 11:26:05.792184   14088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:26:05.792207   14088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:26:05.792275   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531111947-2169 minikube.k8s.io/updated_at=2022_05_31T11_26_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:05.792280   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:05.805712   14088 ops.go:34] apiserver oom_adj: -16
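The oom_adj probe above ran "cat /proc/$(pgrep kube-apiserver)/oom_adj" and read -16, meaning the kernel OOM killer strongly prefers other victims over the apiserver. A minimal sketch of the same check (function name assumed, not minikube's ops.go):

    package main

    import (
        "os"
        "os/exec"
        "strconv"
        "strings"
    )

    // apiserverOOMAdj finds the newest exact-match kube-apiserver process and
    // reads its /proc/<pid>/oom_adj score (-16 in the log above).
    func apiserverOOMAdj() (int, error) {
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            return 0, err
        }
        pid := strings.TrimSpace(string(out))
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(data)))
    }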
	I0531 11:26:05.878613   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:06.556979   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:07.056117   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:07.557355   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:08.056025   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:08.556472   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:09.056023   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:09.556302   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:10.056149   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:10.556128   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:11.056252   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:11.556182   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:12.056522   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:12.556864   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:13.056171   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:13.556181   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:14.056067   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:14.558064   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:15.056191   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:15.557911   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:16.057972   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:16.556078   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:17.056231   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:17.557333   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:18.056063   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:18.556169   14088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 11:26:18.611007   14088 kubeadm.go:1045] duration metric: took 12.818939911s to wait for elevateKubeSystemPrivileges.
	I0531 11:26:18.611026   14088 kubeadm.go:397] StartCluster complete in 5m21.097519624s
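The repeated "kubectl get sa default" runs above are elevateKubeSystemPrivileges waiting (12.8s here) for the default service account to appear before the cluster is treated as usable. A minimal sketch of that retry shape, using the kubectl binary and kubeconfig paths from the log (helper name assumed):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitDefaultSA re-runs "sudo <kubectl> get sa default" every 500ms, the
    // cadence visible in the timestamps above, until it succeeds or times out.
    func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil // default service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %v", timeout)
    }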
	I0531 11:26:18.611048   14088 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:26:18.611133   14088 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:26:18.611692   14088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:26:19.128293   14088 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531111947-2169" rescaled to 1
	I0531 11:26:19.128334   14088 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:26:19.128356   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:26:19.128392   14088 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:26:19.151017   14088 out.go:177] * Verifying Kubernetes components...
	I0531 11:26:19.128876   14088 config.go:178] Loaded profile config "default-k8s-different-port-20220531111947-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:26:19.151067   14088 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.151078   14088 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.151079   14088 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.151095   14088 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.207673   14088 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.207686   14088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531111947-2169"
	W0531 11:26:19.207694   14088 addons.go:165] addon dashboard should already be in state true
	I0531 11:26:19.207682   14088 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531111947-2169"
	W0531 11:26:19.207705   14088 addons.go:165] addon metrics-server should already be in state true
	I0531 11:26:19.207707   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:26:19.207718   14088 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531111947-2169"
	W0531 11:26:19.207732   14088 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:26:19.207745   14088 host.go:66] Checking if "default-k8s-different-port-20220531111947-2169" exists ...
	I0531 11:26:19.207754   14088 host.go:66] Checking if "default-k8s-different-port-20220531111947-2169" exists ...
	I0531 11:26:19.207765   14088 host.go:66] Checking if "default-k8s-different-port-20220531111947-2169" exists ...
	I0531 11:26:19.207998   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.209147   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.209200   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.209252   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.229535   14088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 11:26:19.251709   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.359083   14088 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:26:19.395633   14088 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 11:26:19.417089   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:26:19.417038   14088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 11:26:19.417220   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.454217   14088 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:26:19.475193   14088 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:26:19.456210   14088 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:19.475233   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0531 11:26:19.495917   14088 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:26:19.495977   14088 host.go:66] Checking if "default-k8s-different-port-20220531111947-2169" exists ...
	I0531 11:26:19.496013   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.533139   14088 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 11:26:19.497046   14088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531111947-2169 --format={{.State.Status}}
	I0531 11:26:19.529167   14088 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531111947-2169" to be "Ready" ...
	I0531 11:26:19.554395   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 11:26:19.554412   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:26:19.554509   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.561412   14088 node_ready.go:49] node "default-k8s-different-port-20220531111947-2169" has status "Ready":"True"
	I0531 11:26:19.561427   14088 node_ready.go:38] duration metric: took 7.190013ms waiting for node "default-k8s-different-port-20220531111947-2169" to be "Ready" ...
	I0531 11:26:19.561435   14088 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:26:19.568009   14088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-2lzlj" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:19.577824   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:26:19.580095   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:26:19.625288   14088 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:26:19.625300   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:26:19.625376   14088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531111947-2169
	I0531 11:26:19.641946   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:26:19.702693   14088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53881 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531111947-2169/id_rsa Username:docker}
	I0531 11:26:19.741613   14088 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:26:19.741626   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:26:19.748847   14088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:26:19.763443   14088 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:26:19.763457   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:26:19.839993   14088 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:26:19.840010   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:26:19.841474   14088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:26:19.845785   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:26:19.845802   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:26:19.930906   14088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:26:19.940791   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:26:19.940810   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:26:20.030646   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:26:20.030658   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:26:20.058555   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:26:20.058577   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:26:20.160899   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:26:20.160914   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:26:20.333881   14088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.104326771s)
	I0531 11:26:20.333902   14088 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
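The sed pipeline that just completed rewrites the coredns ConfigMap so that a hosts stanza resolving host.minikube.internal is inserted immediately before the existing "forward . /etc/resolv.conf" line. Reconstructed from the sed expression above, the injected block is:

    hosts {
       192.168.65.2 host.minikube.internal
       fallthrough
    }

The CoreDNS "[INFO] Reloading" lines later in this report are the running pods picking up the replaced Corefile.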
	I0531 11:26:20.335732   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:26:20.335745   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:26:20.457100   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:26:20.457113   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:26:20.541664   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:26:20.541680   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:26:20.554506   14088 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531111947-2169"
	I0531 11:26:20.559258   14088 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:26:20.559273   14088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:26:20.573672   14088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:26:21.082356   14088 pod_ready.go:97] error getting pod "coredns-64897985d-2lzlj" in "kube-system" namespace (skipping!): pods "coredns-64897985d-2lzlj" not found
	I0531 11:26:21.082373   14088 pod_ready.go:81] duration metric: took 1.514366453s waiting for pod "coredns-64897985d-2lzlj" in "kube-system" namespace to be "Ready" ...
	E0531 11:26:21.082383   14088 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-2lzlj" in "kube-system" namespace (skipping!): pods "coredns-64897985d-2lzlj" not found
	I0531 11:26:21.082394   14088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-8gl2g" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:21.368665   14088 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 11:26:21.425779   14088 addons.go:417] enableAddons completed in 2.297424066s
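Each addon above follows the same two-step pattern: manifest bytes are pushed over SSH into /etc/kubernetes/addons ("scp memory --> ..."), then applied in a single kubectl invocation. A minimal sketch of the apply step, mirroring the logged command shape (helper name assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyManifests runs: sudo KUBECONFIG=<kubeconfig> <kubectl> apply -f a.yaml -f b.yaml ...
    // which is the shape of the apply commands in the log above.
    func applyManifests(kubectl, kubeconfig string, files []string) error {
        args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
            return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
        }
        return nil
    }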
	I0531 11:26:23.094776   14088 pod_ready.go:102] pod "coredns-64897985d-8gl2g" in "kube-system" namespace has status "Ready":"False"
	I0531 11:26:24.594197   14088 pod_ready.go:92] pod "coredns-64897985d-8gl2g" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.594209   14088 pod_ready.go:81] duration metric: took 3.511847426s waiting for pod "coredns-64897985d-8gl2g" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.594215   14088 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.598662   14088 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.598670   14088 pod_ready.go:81] duration metric: took 4.450155ms waiting for pod "etcd-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.598675   14088 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.603375   14088 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.603383   14088 pod_ready.go:81] duration metric: took 4.70304ms waiting for pod "kube-apiserver-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.603389   14088 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.607464   14088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.607472   14088 pod_ready.go:81] duration metric: took 4.07821ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.607478   14088 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qcdzt" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.611462   14088 pod_ready.go:92] pod "kube-proxy-qcdzt" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.611471   14088 pod_ready.go:81] duration metric: took 3.988818ms waiting for pod "kube-proxy-qcdzt" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.611478   14088 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.991707   14088 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace has status "Ready":"True"
	I0531 11:26:24.991717   14088 pod_ready.go:81] duration metric: took 380.23889ms waiting for pod "kube-scheduler-default-k8s-different-port-20220531111947-2169" in "kube-system" namespace to be "Ready" ...
	I0531 11:26:24.991723   14088 pod_ready.go:38] duration metric: took 5.430342115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 11:26:24.991737   14088 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:26:24.991787   14088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:26:25.002620   14088 api_server.go:71] duration metric: took 5.8743338s to wait for apiserver process to appear ...
	I0531 11:26:25.002631   14088 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:26:25.002637   14088 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53880/healthz ...
	I0531 11:26:25.007626   14088 api_server.go:266] https://127.0.0.1:53880/healthz returned 200:
	ok
	I0531 11:26:25.008889   14088 api_server.go:140] control plane version: v1.23.6
	I0531 11:26:25.008898   14088 api_server.go:130] duration metric: took 6.263543ms to wait for apiserver health ...
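The healthz wait above is a plain HTTPS GET against the apiserver port that Docker publishes to the host (53880 here), repeated until it answers 200 "ok". A minimal sketch (InsecureSkipVerify keeps the sketch short; a real client should verify against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiServerHealthz probes https://<addr>/healthz once; callers poll it until
    // it returns nil, corresponding to the "returned 200: ok" line above.
    func apiServerHealthz(addr string) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        resp, err := client.Get("https://" + addr + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }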
	I0531 11:26:25.008903   14088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:26:25.193992   14088 system_pods.go:59] 8 kube-system pods found
	I0531 11:26:25.194006   14088 system_pods.go:61] "coredns-64897985d-8gl2g" [20224d90-4fbc-4797-a5d1-b74e0f14966c] Running
	I0531 11:26:25.194010   14088 system_pods.go:61] "etcd-default-k8s-different-port-20220531111947-2169" [ed7b69e4-94a4-414f-9106-d2dc765aa919] Running
	I0531 11:26:25.194013   14088 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531111947-2169" [51fca6d8-ba10-47c5-bc13-a63b7f45905d] Running
	I0531 11:26:25.194017   14088 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531111947-2169" [a0094187-8da5-4b65-be3a-5db231aca832] Running
	I0531 11:26:25.194026   14088 system_pods.go:61] "kube-proxy-qcdzt" [650d3c7e-b8a2-4b30-a0fd-9304c714dbeb] Running
	I0531 11:26:25.194030   14088 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531111947-2169" [5b644c12-9c33-4dc8-8cf4-677604c45171] Running
	I0531 11:26:25.194038   14088 system_pods.go:61] "metrics-server-b955d9d8-6g9pv" [2396aa61-2370-4463-a547-ab35598222fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:26:25.194043   14088 system_pods.go:61] "storage-provisioner" [5834b6ee-483d-4dee-b45e-e4b5ee0d7da2] Running
	I0531 11:26:25.194048   14088 system_pods.go:74] duration metric: took 185.143863ms to wait for pod list to return data ...
	I0531 11:26:25.194053   14088 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:26:25.391790   14088 default_sa.go:45] found service account: "default"
	I0531 11:26:25.391800   14088 default_sa.go:55] duration metric: took 197.745815ms for default service account to be created ...
	I0531 11:26:25.391805   14088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 11:26:25.594937   14088 system_pods.go:86] 8 kube-system pods found
	I0531 11:26:25.594950   14088 system_pods.go:89] "coredns-64897985d-8gl2g" [20224d90-4fbc-4797-a5d1-b74e0f14966c] Running
	I0531 11:26:25.594954   14088 system_pods.go:89] "etcd-default-k8s-different-port-20220531111947-2169" [ed7b69e4-94a4-414f-9106-d2dc765aa919] Running
	I0531 11:26:25.594958   14088 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220531111947-2169" [51fca6d8-ba10-47c5-bc13-a63b7f45905d] Running
	I0531 11:26:25.594961   14088 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220531111947-2169" [a0094187-8da5-4b65-be3a-5db231aca832] Running
	I0531 11:26:25.594965   14088 system_pods.go:89] "kube-proxy-qcdzt" [650d3c7e-b8a2-4b30-a0fd-9304c714dbeb] Running
	I0531 11:26:25.594970   14088 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220531111947-2169" [5b644c12-9c33-4dc8-8cf4-677604c45171] Running
	I0531 11:26:25.594977   14088 system_pods.go:89] "metrics-server-b955d9d8-6g9pv" [2396aa61-2370-4463-a547-ab35598222fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:26:25.594984   14088 system_pods.go:89] "storage-provisioner" [5834b6ee-483d-4dee-b45e-e4b5ee0d7da2] Running
	I0531 11:26:25.594988   14088 system_pods.go:126] duration metric: took 203.182672ms to wait for k8s-apps to be running ...
	I0531 11:26:25.594996   14088 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 11:26:25.595045   14088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:26:25.606138   14088 system_svc.go:56] duration metric: took 11.141768ms WaitForService to wait for kubelet.
	I0531 11:26:25.606150   14088 kubeadm.go:572] duration metric: took 6.477875035s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 11:26:25.606163   14088 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:26:25.792428   14088 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:26:25.792439   14088 node_conditions.go:123] node cpu capacity is 6
	I0531 11:26:25.792446   14088 node_conditions.go:105] duration metric: took 186.282277ms to run NodePressure ...
	I0531 11:26:25.792453   14088 start.go:213] waiting for startup goroutines ...
	I0531 11:26:25.822420   14088 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:26:25.845035   14088 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220531111947-2169" cluster and "default" namespace by default
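The "minor skew: 1" note above is the usual kubectl/apiserver version-skew check: kubectl 1.24.0 against cluster 1.23.6 differs by one minor version, inside kubectl's supported skew of one minor release, so only an informational line is printed. A minimal sketch of the comparison (function name assumed):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor version
    // numbers of two "major.minor.patch" strings.
    func minorSkew(client, cluster string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("unparseable version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(client)
        if err != nil {
            return 0, err
        }
        s, err := minor(cluster)
        if err != nil {
            return 0, err
        }
        if c < s {
            c, s = s, c
        }
        return c - s, nil
    }

    // minorSkew("1.24.0", "1.23.6") == 1, matching the log line above.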
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:20:54 UTC, end at Tue 2022-05-31 18:27:23 UTC. --
	May 31 18:25:45 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:45.585206292Z" level=info msg="ignoring event" container=1de7eea406fddaf55a444a75d5357bf9fdf9f6870bdec050c1f85b08f36d86f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:45 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:45.711307160Z" level=info msg="ignoring event" container=a2cffa9cf2011d5ba5dd0d275daec03381934c2a84c784dbbe14f6d5371097c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:45 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:45.818346277Z" level=info msg="ignoring event" container=17c3636e45b576b2ca4e2378428fc53134e263c827fe73cc27f1d89fee2f0817 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:55 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:55.882878071Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d67540e41143139239ddc2c9e0a22b4b6bc5500be1f2fd2c436c849197bd510b
	May 31 18:25:55 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:55.912677056Z" level=info msg="ignoring event" container=d67540e41143139239ddc2c9e0a22b4b6bc5500be1f2fd2c436c849197bd510b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:56 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:56.013632059Z" level=info msg="ignoring event" container=18a92c133007eb7e611d6f4ee7f9aecdbd18c46ad46953017d2d143f590cea5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:56 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:56.113429949Z" level=info msg="ignoring event" container=a6646bc1f03a02fddb3b6fb2959de34e611a13f57cb4348199ab7dfdc363e2cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:56 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:56.216261311Z" level=info msg="ignoring event" container=4f6f18f37b905d3618a1aa93efa4cf6d2b69a1b454cc6b493de02a8d4d6a8ffe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:25:56 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:25:56.331327946Z" level=info msg="ignoring event" container=3de37b7ea1031712b0f45a1bb06448ed3539e74be07e51b6f32411129aff4c1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:26:19 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:19.112353285Z" level=info msg="ignoring event" container=046c5962c2b793d74d48a9e8fcb22e0d0f2513a3e62de6bf473768b171dcadb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:26:21 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:21.688926815Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:21 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:21.688968585Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:21 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:21.690248772Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:22 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:22.655387612Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 18:26:29 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:29.618516243Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:26:29 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:29.873481009Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 18:26:33 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:33.000594839Z" level=info msg="ignoring event" container=ae3fab8b2a076e2c8319024b834cf59c943cbea276b83e03e3918640790dd843 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:26:33 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:33.885082397Z" level=info msg="ignoring event" container=63d6a777ef6a9c8f8acf4bf28a41b826246dfae33f578abe10731adbf49ab64b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:26:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:35.977599732Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:35.977664324Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:26:35 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:26:35.978837802Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:27:19 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:27:19.981973708Z" level=info msg="ignoring event" container=2de1e18d40ead2d8eca3ab265b48b371512bd10fdb702de89561d9b9d325e9e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:27:20.471649363Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:27:20.471705254Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 dockerd[130]: time="2022-05-31T18:27:20.473159667Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	2de1e18d40ead       a90209bb39e3d                                                                                    4 seconds ago        Exited              dashboard-metrics-scraper   2                   99d535726d0c2
	ed4ea52ccc3d9       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   54 seconds ago       Running             kubernetes-dashboard        0                   d72fd86e980ff
	03c140064443b       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   f6d68ea687c91
	9ccdebca4d293       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   5d5a1cdc62d71
	db58c332639e7       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   39bfff4748953
	5b367aa74fd80       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   77657b78cb5d8
	aa61eab3331e7       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   68765bbba059b
	98f1f9d8f4dcb       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   41f2fe8acf6bb
	3403f45d4dee4       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   a4b1333859beb
	
	* 
	* ==> coredns [9ccdebca4d29] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220531111947-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220531111947-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=default-k8s-different-port-20220531111947-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T11_26_05_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:26:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220531111947-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:27:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:27:16 +0000   Tue, 31 May 2022 18:26:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:27:16 +0000   Tue, 31 May 2022 18:26:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:27:16 +0000   Tue, 31 May 2022 18:26:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:27:16 +0000   Tue, 31 May 2022 18:27:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    default-k8s-different-port-20220531111947-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                77303ae4-ed71-42ab-ab3f-d34a69c51506
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-8gl2g                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     65s
	  kube-system                 etcd-default-k8s-different-port-20220531111947-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220531111947-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220531111947-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-qcdzt                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220531111947-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 metrics-server-b955d9d8-6g9pv                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         63s
	  kube-system                 storage-provisioner                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-58jxt                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-8kcn7                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 64s   kube-proxy  
	  Normal  NodeHasSufficientMemory  78s   kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s   kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s   kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasSufficientPID
	  Normal  Starting                 78s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  77s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                67s   kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeReady
	  Normal  Starting                 7s    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s    kubelet     Node default-k8s-different-port-20220531111947-2169 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [3403f45d4dee] <==
	* {"level":"info","ts":"2022-05-31T18:26:00.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T18:26:00.179Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:26:00.179Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:26:00.180Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:26:00.180Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:26:00.180Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:26:00.180Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:26:00.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:default-k8s-different-port-20220531111947-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:26:00.971Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:26:00.972Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  18:27:23 up  1:15,  0 users,  load average: 0.37, 0.67, 0.95
	Linux default-k8s-different-port-20220531111947-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [aa61eab3331e] <==
	* I0531 18:26:04.116795       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:26:04.140978       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:26:04.180533       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 18:26:04.186064       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 18:26:04.187313       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:26:04.190167       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:26:04.976481       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:26:05.638503       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:26:05.652880       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:26:05.661903       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:26:05.853596       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:26:18.008930       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:26:18.511725       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:26:19.236145       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:26:20.548728       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.111.32.176]
	I0531 18:26:21.269232       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.29.74]
	I0531 18:26:21.338790       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.107.187]
	W0531 18:26:21.437880       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:26:21.437933       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:26:21.437941       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:27:21.394244       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:27:21.394353       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:27:21.394381       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [98f1f9d8f4dc] <==
	* I0531 18:26:18.712706       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8gl2g"
	I0531 18:26:18.728530       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-2lzlj"
	I0531 18:26:20.432631       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0531 18:26:20.438399       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0531 18:26:20.441449       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0531 18:26:20.447784       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-6g9pv"
	W0531 18:26:20.823384       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	I0531 18:26:21.136868       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0531 18:26:21.142833       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:26:21.148843       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0531 18:26:21.148926       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:26:21.152210       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:26:21.152291       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:26:21.154252       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:26:21.158535       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:26:21.160315       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:26:21.160371       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:26:21.164648       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:26:21.164843       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:26:21.167707       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:26:21.167754       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:26:21.179306       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-8kcn7"
	I0531 18:26:21.236561       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-58jxt"
	E0531 18:27:16.143138       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:27:16.216918       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [db58c332639e] <==
	* I0531 18:26:19.156404       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:26:19.156443       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:26:19.156484       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:26:19.229993       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:26:19.230014       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:26:19.230019       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:26:19.230031       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:26:19.230378       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:26:19.231142       1 config.go:317] "Starting service config controller"
	I0531 18:26:19.231159       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:26:19.231179       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:26:19.231183       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:26:19.332047       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:26:19.332103       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5b367aa74fd8] <==
	* E0531 18:26:02.874368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:26:02.873875       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:26:02.874376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:26:02.874363       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:26:02.874391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:26:02.873916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:26:02.874489       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:26:02.874556       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:26:02.874696       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:26:02.874770       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:26:03.727050       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:26:03.727102       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:26:03.826001       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:26:03.826195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:26:03.828079       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:26:03.828158       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:26:03.831615       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:26:03.831670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:26:03.846710       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:26:03.846746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:26:03.917086       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:26:03.917147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:26:03.937446       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:26:03.937540       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 18:26:05.971865       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:20:54 UTC, end at Tue 2022-05-31 18:27:24 UTC. --
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692204    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b45cc5ed-1c03-4907-8209-1b9fa4dc5f17-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-8kcn7\" (UID: \"b45cc5ed-1c03-4907-8209-1b9fa4dc5f17\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8kcn7"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692220    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdg54\" (UniqueName: \"kubernetes.io/projected/b45cc5ed-1c03-4907-8209-1b9fa4dc5f17-kube-api-access-sdg54\") pod \"kubernetes-dashboard-8469778f77-8kcn7\" (UID: \"b45cc5ed-1c03-4907-8209-1b9fa4dc5f17\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8kcn7"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692235    7012 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtszs\" (UniqueName: \"kubernetes.io/projected/2396aa61-2370-4463-a547-ab35598222fd-kube-api-access-rtszs\") pod \"metrics-server-b955d9d8-6g9pv\" (UID: \"2396aa61-2370-4463-a547-ab35598222fd\") " pod="kube-system/metrics-server-b955d9d8-6g9pv"
	May 31 18:27:17 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:17.692255    7012 reconciler.go:157] "Reconciler: start to sync state"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.106352    7012 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220531111947-2169\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220531111947-2169"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.264012    7012 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220531111947-2169\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220531111947-2169"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.464121    7012 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220531111947-2169\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220531111947-2169"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:18.659736    7012 request.go:665] Waited for 1.03703191s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.663936    7012 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220531111947-2169\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220531111947-2169"
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.794361    7012 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.794403    7012 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.794440    7012 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/20224d90-4fbc-4797-a5d1-b74e0f14966c-config-volume podName:20224d90-4fbc-4797-a5d1-b74e0f14966c nodeName:}" failed. No retries permitted until 2022-05-31 18:27:19.294416721 +0000 UTC m=+2.991220061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20224d90-4fbc-4797-a5d1-b74e0f14966c-config-volume") pod "coredns-64897985d-8gl2g" (UID: "20224d90-4fbc-4797-a5d1-b74e0f14966c") : failed to sync configmap cache: timed out waiting for the condition
	May 31 18:27:18 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:18.794454    7012 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/650d3c7e-b8a2-4b30-a0fd-9304c714dbeb-kube-proxy podName:650d3c7e-b8a2-4b30-a0fd-9304c714dbeb nodeName:}" failed. No retries permitted until 2022-05-31 18:27:19.294447271 +0000 UTC m=+2.991250606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/650d3c7e-b8a2-4b30-a0fd-9304c714dbeb-kube-proxy") pod "kube-proxy-qcdzt" (UID: "650d3c7e-b8a2-4b30-a0fd-9304c714dbeb") : failed to sync configmap cache: timed out waiting for the condition
	May 31 18:27:19 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:19.772012    7012 scope.go:110] "RemoveContainer" containerID="63d6a777ef6a9c8f8acf4bf28a41b826246dfae33f578abe10731adbf49ab64b"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: W0531 18:27:20.003251    7012 container.go:489] Failed to get RecentStats("/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4411164_2a83_42f6_97cc_ea5daad54620.slice/docker-2de1e18d40ead2d8eca3ab265b48b371512bd10fdb702de89561d9b9d325e9e4.scope") while determining the next housekeeping: unable to find data in memory cache
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:20.473577    7012 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:20.473666    7012 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:20.473837    7012 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rtszs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-6g9pv_kube-system(2396aa61-2370-4463-a547-ab35598222fd): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:20.473930    7012 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-6g9pv" podUID=2396aa61-2370-4463-a547-ab35598222fd
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:20.637931    7012 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-58jxt through plugin: invalid network status for"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:20.643937    7012 scope.go:110] "RemoveContainer" containerID="63d6a777ef6a9c8f8acf4bf28a41b826246dfae33f578abe10731adbf49ab64b"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:20.644141    7012 scope.go:110] "RemoveContainer" containerID="2de1e18d40ead2d8eca3ab265b48b371512bd10fdb702de89561d9b9d325e9e4"
	May 31 18:27:20 default-k8s-different-port-20220531111947-2169 kubelet[7012]: E0531 18:27:20.644272    7012 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-58jxt_kubernetes-dashboard(b4411164-2a83-42f6-97cc-ea5daad54620)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-58jxt" podUID=b4411164-2a83-42f6-97cc-ea5daad54620
	May 31 18:27:21 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:21.653923    7012 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-58jxt through plugin: invalid network status for"
	May 31 18:27:22 default-k8s-different-port-20220531111947-2169 kubelet[7012]: I0531 18:27:22.064157    7012 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	
	* 
	* ==> kubernetes-dashboard [ed4ea52ccc3d] <==
	* 2022/05/31 18:26:29 Starting overwatch
	2022/05/31 18:26:29 Using namespace: kubernetes-dashboard
	2022/05/31 18:26:29 Using in-cluster config to connect to apiserver
	2022/05/31 18:26:29 Using secret token for csrf signing
	2022/05/31 18:26:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 18:26:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 18:26:29 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 18:26:29 Generating JWE encryption key
	2022/05/31 18:26:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 18:26:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 18:26:29 Initializing JWE encryption key from synchronized object
	2022/05/31 18:26:29 Creating in-cluster Sidecar client
	2022/05/31 18:26:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 18:26:29 Serving insecurely on HTTP port: 9090
	2022/05/31 18:27:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [03c140064443] <==
	* I0531 18:26:21.398008       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:26:21.405747       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:26:21.405826       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:26:21.438225       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:26:21.438303       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"123381d7-0af9-4b94-9365-c6c34f06ee85", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220531111947-2169_fdc7e13b-190a-406d-80dc-158d6eec536e became leader
	I0531 18:26:21.438566       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220531111947-2169_fdc7e13b-190a-406d-80dc-158d6eec536e!
	I0531 18:26:21.538719       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220531111947-2169_fdc7e13b-190a-406d-80dc-158d6eec536e!
	

                                                
                                                
-- /stdout --
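The kube-scheduler "forbidden" errors in the dump above are the usual startup-ordering noise: the scheduler comes up before RBAC bootstrapping finishes, and its informer caches sync a moment later (18:26:05). If needed, the permission can be spot-checked by hand once the cluster is up; a minimal sketch (the impersonated "kubectl auth can-i" call is illustrative and not part of the test):

	kubectl --context default-k8s-different-port-20220531111947-2169 \
	  auth can-i list pods --as=system:kube-scheduler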
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220531111947-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-6g9pv
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220531111947-2169 describe pod metrics-server-b955d9d8-6g9pv
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531111947-2169 describe pod metrics-server-b955d9d8-6g9pv: exit status 1 (292.237595ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-6g9pv" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220531111947-2169 describe pod metrics-server-b955d9d8-6g9pv: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (43.28s)
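The only non-running pod above is metrics-server-b955d9d8-6g9pv, whose ErrImagePull against fake.domain is expected in this suite: the addon was deliberately pointed at a fake registry (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the audit trail further down). The NotFound from the describe call suggests the pod was deleted between the list and the describe. A sketch of the same checks by hand, reusing the helpers' command and adding an illustrative events query (assuming the profile still exists):

	kubectl --context default-k8s-different-port-20220531111947-2169 get po -A \
	  --field-selector=status.phase!=Running -o=jsonpath={.items[*].metadata.name}
	kubectl --context default-k8s-different-port-20220531111947-2169 \
	  -n kube-system get events --field-selector reason=Failed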

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (49.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220531112729-2169 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
E0531 11:28:46.305417    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169: exit status 2 (16.092291579s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169: exit status 2 (16.102880006s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220531112729-2169 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
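The failing sequence is reproducible by hand with the exact commands from the trace above; a minimal sketch, assuming the newest-cni profile is still running:

	out/minikube-darwin-amd64 pause -p newest-cni-20220531112729-2169 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
	out/minikube-darwin-amd64 unpause -p newest-cni-20220531112729-2169 --alsologtostderr -v=1

The assertion expects "Paused" from the status call after the pause; this run returned "Stopped" twice, and each status probe itself took roughly 16 seconds, which is a hint that the node was slow to respond.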
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220531112729-2169
helpers_test.go:235: (dbg) docker inspect newest-cni-20220531112729-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1",
	        "Created": "2022-05-31T18:27:36.034608276Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274950,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:28:21.80880565Z",
	            "FinishedAt": "2022-05-31T18:28:19.877911414Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1/hosts",
	        "LogPath": "/var/lib/docker/containers/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1-json.log",
	        "Name": "/newest-cni-20220531112729-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220531112729-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220531112729-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b53cfcddda9b6eb61be4cbe72d1aa85943035159636a3c4b3ebc31701f1e3a31-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b53cfcddda9b6eb61be4cbe72d1aa85943035159636a3c4b3ebc31701f1e3a31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b53cfcddda9b6eb61be4cbe72d1aa85943035159636a3c4b3ebc31701f1e3a31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b53cfcddda9b6eb61be4cbe72d1aa85943035159636a3c4b3ebc31701f1e3a31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220531112729-2169",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220531112729-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220531112729-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220531112729-2169",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220531112729-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dccacc59ed3bc8a780c7bd816f97ebe7d4b39df641369f3464d12f665ed8586e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55181"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dccacc59ed3b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220531112729-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4d4fcac3a251",
	                        "newest-cni-20220531112729-2169"
	                    ],
	                    "NetworkID": "147c62ffd7f8eb5bf4dc44f7cfdec6e219304c41f5644968d9079ed6e2aefb26",
	                    "EndpointID": "3627c087c717fc20bcf6601407ec30b2bee4abbc24d0b2103429ab43e9d3e21d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
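Only a handful of fields in the inspect dump above matter for the pause check; a shorter, illustrative query against the same container uses docker inspect's -f/--format flag with the State fields shown above:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-20220531112729-2169

At post-mortem time the container itself reports "running" with Paused=false, so the "Stopped" status returned earlier evidently reflects the Kubernetes processes inside the node rather than the container's own state.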
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220531112729-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220531112729-2169 logs -n 25: (4.643776195s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | embed-certs-20220531111208-2169                            | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                            |                                                |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                            |                                                |         |                |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220531111946-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | disable-driver-mounts-20220531111946-2169                  |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169                        | old-k8s-version-20220531110241-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531111947-2169             | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531111947-2169             | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531112729-2169 --memory=2200            | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:28 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531112729-2169 --memory=2200            | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:29 PDT | 31 May 22 11:29 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:28:20
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:28:20.578170   14601 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:28:20.578344   14601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:28:20.578349   14601 out.go:309] Setting ErrFile to fd 2...
	I0531 11:28:20.578353   14601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:28:20.578450   14601 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:28:20.578728   14601 out.go:303] Setting JSON to false
	I0531 11:28:20.593905   14601 start.go:115] hostinfo: {"hostname":"37309.local","uptime":5269,"bootTime":1654016431,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:28:20.594000   14601 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:28:20.616053   14601 out.go:177] * [newest-cni-20220531112729-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:28:20.657488   14601 notify.go:193] Checking for updates...
	I0531 11:28:20.678853   14601 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:28:20.700919   14601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:20.721904   14601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:28:20.744090   14601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:28:20.766040   14601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:28:20.788411   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:20.789066   14601 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:28:20.860555   14601 docker.go:137] docker version: linux-20.10.14
	I0531 11:28:20.860683   14601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:28:20.986144   14601 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:28:20.919865429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:28:21.008666   14601 out.go:177] * Using the docker driver based on existing profile
	I0531 11:28:21.030286   14601 start.go:284] selected driver: docker
	I0531 11:28:21.030310   14601 start.go:806] validating driver "docker" against &{Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:21.030459   14601 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:28:21.033857   14601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:28:21.157996   14601 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:28:21.093562365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:28:21.158200   14601 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0531 11:28:21.158219   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:21.158227   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:21.158238   14601 start_flags.go:306] config:
	{Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:21.201959   14601 out.go:177] * Starting control plane node newest-cni-20220531112729-2169 in cluster newest-cni-20220531112729-2169
	I0531 11:28:21.224000   14601 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:28:21.245702   14601 out.go:177] * Pulling base image ...
	I0531 11:28:21.287911   14601 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:28:21.287945   14601 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:28:21.288004   14601 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:28:21.288030   14601 cache.go:57] Caching tarball of preloaded images
	I0531 11:28:21.288213   14601 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:28:21.288233   14601 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 11:28:21.289361   14601 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/config.json ...
	I0531 11:28:21.352247   14601 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:28:21.352265   14601 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:28:21.352276   14601 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:28:21.352353   14601 start.go:352] acquiring machines lock for newest-cni-20220531112729-2169: {Name:mk223b02c8d18fd8125fc1aec4677c6b6e6ebb27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:28:21.352426   14601 start.go:356] acquired machines lock for "newest-cni-20220531112729-2169" in 55.579µs
	I0531 11:28:21.352446   14601 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:28:21.352452   14601 fix.go:55] fixHost starting: 
	I0531 11:28:21.352679   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:21.419750   14601 fix.go:103] recreateIfNeeded on newest-cni-20220531112729-2169: state=Stopped err=<nil>
	W0531 11:28:21.419776   14601 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:28:21.441735   14601 out.go:177] * Restarting existing docker container for "newest-cni-20220531112729-2169" ...
	I0531 11:28:21.463797   14601 cli_runner.go:164] Run: docker start newest-cni-20220531112729-2169
	I0531 11:28:21.814105   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:21.885658   14601 kic.go:416] container "newest-cni-20220531112729-2169" state is running.
	I0531 11:28:21.886206   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:21.959677   14601 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/config.json ...
	I0531 11:28:21.960076   14601 machine.go:88] provisioning docker machine ...
	I0531 11:28:21.960098   14601 ubuntu.go:169] provisioning hostname "newest-cni-20220531112729-2169"
	I0531 11:28:21.960175   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.032376   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.032565   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.032580   14601 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531112729-2169 && echo "newest-cni-20220531112729-2169" | sudo tee /etc/hostname
	I0531 11:28:22.157035   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531112729-2169
	
	I0531 11:28:22.157115   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.228089   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.228232   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.228247   14601 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531112729-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531112729-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531112729-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
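
The SSH script above is an idempotent hostname pin: it only touches /etc/hosts when no line already names the machine, and it rewrites an existing 127.0.1.1 entry in place rather than appending a duplicate. A rough Go sketch of just the detection half (the grep -xq test), with the hostname hardcoded from this log; this is an illustrative stand-in, not minikube's provisioner:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Same question the grep -xq above answers: does any line of
    	// /etc/hosts already mention the hostname?
    	const name = "newest-cni-20220531112729-2169"
    	f, err := os.Open("/etc/hosts")
    	if err != nil {
    		fmt.Println("open failed:", err)
    		return
    	}
    	defer f.Close()
    	found := false
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		for _, field := range strings.Fields(sc.Text()) {
    			if field == name {
    				found = true
    			}
    		}
    	}
    	fmt.Println("hostname present:", found)
    }
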
	I0531 11:28:22.339894   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:28:22.339918   14601 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:28:22.339945   14601 ubuntu.go:177] setting up certificates
	I0531 11:28:22.339961   14601 provision.go:83] configureAuth start
	I0531 11:28:22.340025   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:22.411484   14601 provision.go:138] copyHostCerts
	I0531 11:28:22.411577   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:28:22.411587   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:28:22.411674   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:28:22.411878   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:28:22.411888   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:28:22.411944   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:28:22.412077   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:28:22.412083   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:28:22.412138   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:28:22.412247   14601 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531112729-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531112729-2169]
	I0531 11:28:22.494505   14601 provision.go:172] copyRemoteCerts
	I0531 11:28:22.494581   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:28:22.494633   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.566548   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:22.647934   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:28:22.667800   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 11:28:22.686536   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:28:22.706691   14601 provision.go:86] duration metric: configureAuth took 366.717286ms
	I0531 11:28:22.706707   14601 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:28:22.706872   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:22.706929   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.778451   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.778594   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.778608   14601 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:28:22.890950   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:28:22.890970   14601 ubuntu.go:71] root file system type: overlay
	I0531 11:28:22.891144   14601 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:28:22.891228   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.963222   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.963394   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.963443   14601 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:28:23.086363   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:28:23.086459   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.156610   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:23.156758   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:23.156784   14601 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:28:23.275544   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:28:23.275558   14601 machine.go:91] provisioned docker machine in 1.315489714s
	I0531 11:28:23.275564   14601 start.go:306] post-start starting for "newest-cni-20220531112729-2169" (driver="docker")
	I0531 11:28:23.275568   14601 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:28:23.275635   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:28:23.275687   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.345259   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.429063   14601 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:28:23.432446   14601 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:28:23.432461   14601 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:28:23.432468   14601 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:28:23.432475   14601 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:28:23.432482   14601 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:28:23.432621   14601 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:28:23.432759   14601 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:28:23.432905   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:28:23.439726   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:28:23.456662   14601 start.go:309] post-start completed in 181.091671ms
	I0531 11:28:23.456739   14601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:28:23.456786   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.526742   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.605692   14601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:28:23.610546   14601 fix.go:57] fixHost completed within 2.258116486s
	I0531 11:28:23.610566   14601 start.go:81] releasing machines lock for "newest-cni-20220531112729-2169", held for 2.258159111s
	I0531 11:28:23.610672   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:23.680718   14601 ssh_runner.go:195] Run: systemctl --version
	I0531 11:28:23.680719   14601 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:28:23.680772   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.680795   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.754229   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.757054   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.836240   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:28:23.968614   14601 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:28:23.978398   14601 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:28:23.978455   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:28:23.987743   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:28:24.000522   14601 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:28:24.067960   14601 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:28:24.135864   14601 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:28:24.145523   14601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:28:24.212934   14601 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:28:24.222595   14601 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:28:24.257967   14601 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:28:24.335762   14601 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:28:24.335888   14601 cli_runner.go:164] Run: docker exec -t newest-cni-20220531112729-2169 dig +short host.docker.internal
	I0531 11:28:24.460335   14601 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:28:24.460445   14601 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:28:24.464822   14601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
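
Two steps happen here: the docker exec of `dig +short host.docker.internal` recovers the host's IP as seen from inside the container (192.168.65.2), and the bash one-liner then rewrites /etc/hosts so host.minikube.internal resolves to it. A minimal sketch of the lookup half, assuming it runs inside a Docker Desktop container where that name resolves (a variant of what the log does with dig, not minikube's code):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Inside a Docker Desktop container, "host.docker.internal" resolves
    	// to the host gateway IP (192.168.65.2 in the log above). The lookup
    	// fails outside that environment.
    	addrs, err := net.LookupHost("host.docker.internal")
    	if err != nil {
    		fmt.Println("lookup failed (not running under Docker Desktop?):", err)
    		return
    	}
    	fmt.Println("host ip:", addrs[0])
    }
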
	I0531 11:28:24.475293   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:24.566562   14601 out.go:177]   - kubelet.network-plugin=cni
	I0531 11:28:24.588831   14601 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 11:28:24.610772   14601 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:28:24.610916   14601 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:28:24.642456   14601 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 11:28:24.642471   14601 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:28:24.642549   14601 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:28:24.671595   14601 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
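
The image inventory is taken twice: docker.go:541 first uses it to decide that tarball extraction can be skipped, and cache_images.go:84 then re-verifies before skipping the image load. A small sketch of that membership check against the same `docker images --format {{.Repository}}:{{.Tag}}` template; the two expected names are copied from the list above, and this is an illustration rather than minikube's implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List local images as repo:tag, the template shown in the log.
    	out, err := exec.Command("docker", "images",
    		"--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		fmt.Println("docker images failed:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	// Spot-check two images the v1.23.6 preload listed above.
    	for _, want := range []string{
    		"k8s.gcr.io/kube-apiserver:v1.23.6",
    		"k8s.gcr.io/pause:3.6",
    	} {
    		fmt.Println(want, "preloaded:", have[want])
    	}
    }
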
	I0531 11:28:24.671615   14601 cache_images.go:84] Images are preloaded, skipping loading
	I0531 11:28:24.671707   14601 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:28:24.745107   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:24.745118   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:24.745131   14601 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0531 11:28:24.745142   14601 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220531112729-2169 NodeName:newest-cni-20220531112729-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:28:24.745273   14601 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220531112729-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 11:28:24.745338   14601 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220531112729-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
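
The unit text above is a systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). The empty ExecStart= is deliberate: it clears the base unit's command so the second ExecStart= fully replaces it rather than appending a second command, which systemd would reject for a simple service. A small sketch of rendering such a drop-in with text/template (the template and field names are illustrative, not minikube's generator):

package main

import (
	"os"
	"text/template"
)

// A drop-in must blank ExecStart before redefining it; otherwise
// systemd sees two ExecStart lines and refuses to start the unit.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.Bin}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Bin":  "/var/lib/minikube/binaries/v1.23.6/kubelet",
		"Node": "newest-cni-20220531112729-2169",
		"IP":   "192.168.58.2",
	})
}
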
	I0531 11:28:24.745395   14601 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:28:24.752959   14601 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:28:24.753032   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:28:24.759894   14601 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0531 11:28:24.772209   14601 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:28:24.784449   14601 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
	I0531 11:28:24.796924   14601 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:28:24.800433   14601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:28:24.809821   14601 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169 for IP: 192.168.58.2
	I0531 11:28:24.809929   14601 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:28:24.810011   14601 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:28:24.810092   14601 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/client.key
	I0531 11:28:24.810156   14601 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.key.cee25041
	I0531 11:28:24.810205   14601 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.key
	I0531 11:28:24.810423   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:28:24.810461   14601 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:28:24.810473   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:28:24.810508   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:28:24.810539   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:28:24.810574   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:28:24.810635   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:28:24.811155   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:28:24.827721   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 11:28:24.844468   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:28:24.861175   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 11:28:24.878420   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:28:24.896393   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:28:24.913732   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:28:24.930651   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:28:24.947273   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:28:24.963888   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:28:24.980969   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:28:24.998182   14601 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:28:25.010512   14601 ssh_runner.go:195] Run: openssl version
	I0531 11:28:25.015678   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:28:25.023240   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.026950   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.026984   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.032002   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:28:25.039256   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:28:25.046930   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.050635   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.050678   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.055739   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:28:25.062867   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:28:25.070401   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.074092   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.074134   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.079290   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
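
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: verifiers locate a CA in /etc/ssl/certs by the hash of its subject name, so every PEM needs a <subject-hash>.0 symlink (the job c_rehash normally does; here it is done by hand). A sketch that computes the expected link name, assuming an openssl binary on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkName returns the "<subject-hash>.0" filename OpenSSL expects
// for a CA certificate in a hashed directory such as /etc/ssl/certs.
func linkName(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := linkName("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	fmt.Println("symlink name:", name) // e.g. b5213941.0, as in the log above
}
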
	I0531 11:28:25.086508   14601 kubeadm.go:395] StartCluster: {Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:25.086608   14601 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:28:25.115424   14601 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:28:25.123088   14601 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:28:25.123106   14601 kubeadm.go:626] restartCluster start
	I0531 11:28:25.123166   14601 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:28:25.130286   14601 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.130356   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:25.201430   14601 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220531112729-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:25.201614   14601 kubeconfig.go:127] "newest-cni-20220531112729-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:28:25.202983   14601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:25.204253   14601 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:28:25.211900   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.211944   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.220060   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.422181   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.422379   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.433780   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.620174   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.620309   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.632389   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.820588   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.820750   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.831459   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.022339   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.022452   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.032812   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.222206   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.222338   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.233015   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.422196   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.422320   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.432962   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.620439   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.620525   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.629706   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.820697   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.820806   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.831264   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.022187   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.022343   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.032905   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.220251   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.220391   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.230734   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.420963   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.421067   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.430383   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.621762   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.621857   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.632734   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.820676   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.820772   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.831490   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.022170   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.022312   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.033000   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.221452   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.221582   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.232311   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.232323   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.232368   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.240337   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.240351   14601 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 11:28:28.240362   14601 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:28:28.240419   14601 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:28:28.270566   14601 docker.go:442] Stopping containers: [b8f7cf8c7771 432c9954381c 7c076965981f 963a9454c026 22c69b053d31 85c82e0a3dfd 0f95a6838cd9 02136fcb6f2a 1968673ca085 f103292226f6 78ffb0ab7dc5 7685bdfe2259 c2c4289070e6 53615169312d b84f3422d4f3 9b9f23fa412f c5d361a450c5]
	I0531 11:28:28.270636   14601 ssh_runner.go:195] Run: docker stop b8f7cf8c7771 432c9954381c 7c076965981f 963a9454c026 22c69b053d31 85c82e0a3dfd 0f95a6838cd9 02136fcb6f2a 1968673ca085 f103292226f6 78ffb0ab7dc5 7685bdfe2259 c2c4289070e6 53615169312d b84f3422d4f3 9b9f23fa412f c5d361a450c5
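
The filter name=k8s_.*_(kube-system)_ works because the kubelet's dockershim names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so the regex selects exactly the kube-system pods ahead of a bulk docker stop. A hedged Go equivalent of the two commands above (plain CLI calls, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List IDs of containers whose kubelet-assigned name marks them
	// as belonging to the kube-system namespace.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	// Stop them all in a single invocation, as the log above does.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		fmt.Println("docker stop failed:", err)
	}
}
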
	I0531 11:28:28.300361   14601 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:28:28.310648   14601 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:28:28.318184   14601 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 18:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 18:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 18:27 /etc/kubernetes/scheduler.conf
	
	I0531 11:28:28.318233   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:28:28.325358   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:28:28.332511   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:28:28.339617   14601 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.339668   14601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:28:28.346531   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:28:28.353553   14601 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.353596   14601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 11:28:28.360560   14601 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:28:28.367876   14601 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:28:28.367886   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:28.411595   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.400253   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.531124   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.579235   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
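
Rather than a full kubeadm init, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, then local etcd, regenerating only what the existing cluster needs. A minimal sketch of driving that same sequence (a plain local runner; the log above executes these over SSH with the pinned binaries on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	// The ordered subset of init phases used to reconfigure an
	// existing cluster without wiping its state.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		if err := exec.Command("kubeadm", args...).Run(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n", args, err)
			return
		}
	}
}
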
	I0531 11:28:29.634030   14601 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:28:29.634095   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.148677   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.646607   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.665446   14601 api_server.go:71] duration metric: took 1.031430401s to wait for apiserver process to appear ...
	I0531 11:28:30.665473   14601 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:28:30.665491   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:30.666674   14601 api_server.go:256] stopped: https://127.0.0.1:55181/healthz: Get "https://127.0.0.1:55181/healthz": EOF
	I0531 11:28:31.168738   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:33.658657   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:28:33.658673   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:28:33.667080   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:33.676399   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:28:33.676422   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:28:34.166979   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:34.174133   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:28:34.174146   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
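
The 403 and 500 responses above are progress, not failure: a 403 means the apiserver is already terminating TLS but RBAC has not yet opened /healthz to system:anonymous, and the 500s show the remaining bootstrap post-start hooks draining, so the poll simply retries until the 200 "ok" that follows. A sketch of such a poll (hypothetical helper; it skips certificate verification the way an anonymous bootstrap probe must):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns
// 200, treating 403/500 as "up but not ready yet".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Anonymous probe: accept the apiserver's self-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://127.0.0.1:55181/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
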
	I0531 11:28:34.667060   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:34.672889   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:28:34.672907   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:28:35.166970   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:35.172997   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 200:
	ok
	I0531 11:28:35.179538   14601 api_server.go:140] control plane version: v1.23.6
	I0531 11:28:35.179550   14601 api_server.go:130] duration metric: took 4.514120757s to wait for apiserver health ...
	I0531 11:28:35.179559   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:35.179568   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:35.179579   14601 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:28:35.186211   14601 system_pods.go:59] 8 kube-system pods found
	I0531 11:28:35.186226   14601 system_pods.go:61] "coredns-64897985d-m9wpk" [6f096a6e-7731-47f7-b98e-6eedbbd5b841] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 11:28:35.186231   14601 system_pods.go:61] "etcd-newest-cni-20220531112729-2169" [a5bfba25-ff48-42e0-9142-b085b624ec85] Running
	I0531 11:28:35.186234   14601 system_pods.go:61] "kube-apiserver-newest-cni-20220531112729-2169" [c890673a-c33b-4b7e-a6dd-241265cbe97e] Running
	I0531 11:28:35.186238   14601 system_pods.go:61] "kube-controller-manager-newest-cni-20220531112729-2169" [f085c574-4e96-49d9-b05a-9ae7e77756a4] Running
	I0531 11:28:35.186244   14601 system_pods.go:61] "kube-proxy-rml7v" [2a4877b2-6059-4ed5-b39a-d3aa0e50175a] Running
	I0531 11:28:35.186249   14601 system_pods.go:61] "kube-scheduler-newest-cni-20220531112729-2169" [13285495-f320-4400-a06d-5aa124a9f708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 11:28:35.186256   14601 system_pods.go:61] "metrics-server-b955d9d8-4nh24" [d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:28:35.186260   14601 system_pods.go:61] "storage-provisioner" [dfa38144-a068-4404-9087-254b825409e4] Running
	I0531 11:28:35.186263   14601 system_pods.go:74] duration metric: took 6.680457ms to wait for pod list to return data ...
	I0531 11:28:35.186268   14601 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:28:35.188933   14601 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:28:35.188950   14601 node_conditions.go:123] node cpu capacity is 6
	I0531 11:28:35.188962   14601 node_conditions.go:105] duration metric: took 2.690302ms to run NodePressure ...
	I0531 11:28:35.188973   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:35.352632   14601 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:28:35.361125   14601 ops.go:34] apiserver oom_adj: -16
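
Reading /proc/<pid>/oom_adj verifies the apiserver was launched with OOM protection; -16 tells the kernel's OOM killer to strongly prefer other victims. A Linux-only sketch of the same check as the cat /proc/$(pgrep kube-apiserver)/oom_adj above (pgrep -n picks the newest matching process):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver pid, then read its OOM score
	// adjustment from procfs (negative = protected from the OOM killer).
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read oom_adj:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
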
	I0531 11:28:35.361143   14601 kubeadm.go:630] restartCluster took 10.238154537s
	I0531 11:28:35.361151   14601 kubeadm.go:397] StartCluster complete in 10.274772238s
	I0531 11:28:35.361170   14601 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:35.361244   14601 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:35.361875   14601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:35.364955   14601 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531112729-2169" rescaled to 1
	I0531 11:28:35.364987   14601 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:28:35.441880   14601 out.go:177] * Verifying Kubernetes components...
	I0531 11:28:35.365003   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:28:35.365025   14601 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:28:35.365144   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:35.442135   14601 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.442144   14601 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.479754   14601 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531112729-2169"
	W0531 11:28:35.479769   14601 addons.go:165] addon metrics-server should already be in state true
	I0531 11:28:35.479783   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:28:35.479760   14601 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531112729-2169"
	W0531 11:28:35.479821   14601 addons.go:165] addon dashboard should already be in state true
	I0531 11:28:35.479822   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.442127   14601 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.479852   14601 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531112729-2169"
	I0531 11:28:35.479855   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.442146   14601 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531112729-2169"
	W0531 11:28:35.479865   14601 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:28:35.479883   14601 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531112729-2169"
	I0531 11:28:35.479911   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.480183   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.480218   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.480300   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.481040   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.525057   14601 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 11:28:35.525155   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.676584   14601 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:28:35.624235   14601 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531112729-2169"
	I0531 11:28:35.639696   14601 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:28:35.713827   14601 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0531 11:28:35.676634   14601 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:28:35.731542   14601 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:28:35.751936   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.752094   14601 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:28:35.811040   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 11:28:35.849003   14601 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 11:28:35.811081   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:35.811141   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 11:28:35.811765   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.849157   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.886646   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:28:35.886796   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 11:28:35.886795   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.886823   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:28:35.886947   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.907168   14601 api_server.go:71] duration metric: took 542.159545ms to wait for apiserver process to appear ...
	I0531 11:28:35.907219   14601 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:28:35.907267   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:35.920781   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 200:
	ok
	I0531 11:28:35.923207   14601 api_server.go:140] control plane version: v1.23.6
	I0531 11:28:35.923240   14601 api_server.go:130] duration metric: took 16.012254ms to wait for apiserver health ...
	I0531 11:28:35.923248   14601 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:28:35.933658   14601 system_pods.go:59] 8 kube-system pods found
	I0531 11:28:35.933689   14601 system_pods.go:61] "coredns-64897985d-m9wpk" [6f096a6e-7731-47f7-b98e-6eedbbd5b841] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 11:28:35.933698   14601 system_pods.go:61] "etcd-newest-cni-20220531112729-2169" [a5bfba25-ff48-42e0-9142-b085b624ec85] Running
	I0531 11:28:35.933710   14601 system_pods.go:61] "kube-apiserver-newest-cni-20220531112729-2169" [c890673a-c33b-4b7e-a6dd-241265cbe97e] Running
	I0531 11:28:35.933728   14601 system_pods.go:61] "kube-controller-manager-newest-cni-20220531112729-2169" [f085c574-4e96-49d9-b05a-9ae7e77756a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 11:28:35.933736   14601 system_pods.go:61] "kube-proxy-rml7v" [2a4877b2-6059-4ed5-b39a-d3aa0e50175a] Running
	I0531 11:28:35.933747   14601 system_pods.go:61] "kube-scheduler-newest-cni-20220531112729-2169" [13285495-f320-4400-a06d-5aa124a9f708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 11:28:35.933759   14601 system_pods.go:61] "metrics-server-b955d9d8-4nh24" [d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:28:35.933779   14601 system_pods.go:61] "storage-provisioner" [dfa38144-a068-4404-9087-254b825409e4] Running
	I0531 11:28:35.933786   14601 system_pods.go:74] duration metric: took 10.533198ms to wait for pod list to return data ...
	I0531 11:28:35.933792   14601 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:28:35.938145   14601 default_sa.go:45] found service account: "default"
	I0531 11:28:35.938165   14601 default_sa.go:55] duration metric: took 4.366593ms for default service account to be created ...
	I0531 11:28:35.938197   14601 kubeadm.go:572] duration metric: took 573.199171ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 11:28:35.938223   14601 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:28:35.942426   14601 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:28:35.942450   14601 node_conditions.go:123] node cpu capacity is 6
	I0531 11:28:35.942465   14601 node_conditions.go:105] duration metric: took 4.236351ms to run NodePressure ...
	I0531 11:28:35.942485   14601 start.go:213] waiting for startup goroutines ...
	I0531 11:28:36.012965   14601 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:28:36.012980   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:28:36.013037   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:36.013049   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.013580   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.015074   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.092243   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.148273   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:28:36.245789   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:28:36.245817   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:28:36.247745   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:28:36.247758   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:28:36.345201   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:28:36.345217   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:28:36.345894   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:28:36.348010   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:28:36.348023   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:28:36.433009   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:28:36.433023   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:28:36.436199   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:28:36.436215   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:28:36.458750   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:28:36.458764   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:28:36.460817   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:28:36.555796   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:28:36.555811   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:28:36.660576   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:28:36.660591   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:28:36.746397   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:28:36.746413   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:28:36.762642   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:28:36.762659   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:28:36.779687   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:28:36.779700   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:28:36.851105   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:28:37.356022   14601 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.207732118s)
	I0531 11:28:37.356099   14601 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010190378s)
	I0531 11:28:37.447297   14601 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531112729-2169"
	I0531 11:28:37.650818   14601 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 11:28:37.709272   14601 addons.go:417] enableAddons completed in 2.34427737s
	I0531 11:28:37.742397   14601 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:28:37.763847   14601 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531112729-2169" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:28:21 UTC, end at Tue 2022-05-31 18:29:14 UTC. --
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.567029468Z" level=info msg="Removing stale sandbox 2a0b990a9e2ad6d75f364afe20e8118d3efa2664e5e6fb6945382bc0265d7da8 (963a9454c02629f16e78c70841dea207911e3bf803f1fe050ca74d9f6978ec51)"
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.568406058Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4900b215c73fd9f1c54a8ea8f7a48401292d11fea73679baa164924b87802490 89d27be32d8ba01ffa926bb2fa7ee34be29c49cf8236451d8767e2b187e6d1d6], retrying...."
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.662168520Z" level=info msg="Removing stale sandbox 2ee8914ea0126c701881278d930fa5d78932b3085c89483f37487176d279e6d2 (02136fcb6f2a8c6d04404dd94d54872f3067a195c2549f918b5d7ecd6190d6ea)"
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.663366599Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4900b215c73fd9f1c54a8ea8f7a48401292d11fea73679baa164924b87802490 8a904fdf3543b501e76e90d6150e0b440f3dc256e754d7a632752854930e6cb3], retrying...."
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.754124369Z" level=info msg="Removing stale sandbox 5db2020e5a963476854e4ecd56f785da61432dadff88bdc20a07df54095b54dd (9b9f23fa412f9e5620a63c8bd74b20ff6ecc97fcfac81318faf9902f0029d21e)"
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.755451039Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4900b215c73fd9f1c54a8ea8f7a48401292d11fea73679baa164924b87802490 95b7dc7f825b0914f1a8d76e7d146e4b6394107c7753cbd67789ac7eada88c76], retrying...."
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.843631844Z" level=info msg="Removing stale sandbox 817bdcb1635db7a796453e0dd9db54cc157a2c73a06128476738b571e4e52c16 (432c9954381c07446e6ca83cf5874bf8d730a443a369125c32694c27d9d51576)"
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.844863864Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint a60cda09953e19ddec70569b356f827d05d535acc5988b9e7bfbc8d1405e5ebf 0fbfcee7b91651fd66f18c98a49c1d62678be4a9a431093c03b48d5bd637461d], retrying...."
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.847038311Z" level=info msg="Removing stale endpoint k8s_POD_metrics-server-b955d9d8-4nh24_kube-system_d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5_0 (a85ff69691c24024612cfb519897d6d6ecc01a72dd983c85fcdd0d5f0cc0c894)"
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.871987189Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.914046293Z" level=info msg="Loading containers: done."
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.931665052Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.931738961Z" level=info msg="Daemon has completed initialization"
	May 31 18:28:22 newest-cni-20220531112729-2169 systemd[1]: Started Docker Application Container Engine.
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.953971174Z" level=info msg="API listen on [::]:2376"
	May 31 18:28:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:22.959053999Z" level=info msg="API listen on /var/run/docker.sock"
	May 31 18:28:35 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:35.382976690Z" level=info msg="ignoring event" container=b34eee5b99883b6040a285b792c1655886c30e343477ade1ab1e51cec6ca88f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:28:36 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:36.870873846Z" level=info msg="ignoring event" container=30c3d2e8afd52e9fb9fd912e93e76e49cf6dc704001f98b5c491577f3bdcc168 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:28:36 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:36.959156799Z" level=info msg="ignoring event" container=765799c2459ca41f5501e30b95a0cce60ac2e9123180f70cf241c7ee38cd162e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:28:37 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:37.948289358Z" level=info msg="ignoring event" container=ee4e021747c1572cf4b83e02eba5f1014c319947a9f1b1cf3f0c4510b2933187 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:28:37 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:37.960264411Z" level=info msg="ignoring event" container=843ad3c310b302956c21c2f0e07ec83d986ec592a94511483125c282517e6dc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:28:38 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:38.565280112Z" level=info msg="ignoring event" container=02d141ba5ebb75c91a9175882b44ad61f73d64d3986c63a68ae52b397a72e4df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:28:38 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:38.572933138Z" level=info msg="ignoring event" container=0a0ea6091e5de168ef702e9b35779f35b158c42743ecbe7dc1a5f6e636916459 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:28:39 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:39.541001799Z" level=error msg="Handler for GET /v1.41/containers/7a47e564e7ac08813dcecfa1b75dde332695bfd626e0f2c84938334ce5236a5e/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	May 31 18:28:39 newest-cni-20220531112729-2169 dockerd[130]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
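The alternating "Removing stale sandbox" / "Error (Unable to complete atomic operation, key modified) ... retrying" pairs above are the signature of an optimistically concurrent datastore: a delete is issued against the record version that was read, and when another writer has bumped that version first, the operation fails and is retried against a fresh read. The toy Go sketch below shows that general pattern; the store, key, and interfering write are invented for illustration, and this is not libnetwork's code.

// retrydelete.go - toy optimistic-concurrency delete with retry, illustrating
// the "key modified, retrying" pattern in the dockerd log above.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errKeyModified = errors.New("unable to complete atomic operation, key modified")

type record struct {
	value   string
	version uint64
}

type store struct {
	mu   sync.Mutex
	data map[string]record
}

// get returns the current value and version for key.
func (s *store) get(key string) (record, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	r, ok := s.data[key]
	return r, ok
}

// deleteAtVersion removes key only if its version still matches; otherwise
// it fails so the caller can re-read and retry.
func (s *store) deleteAtVersion(key string, version uint64) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	r, ok := s.data[key]
	if !ok {
		return nil // already gone: nothing to do
	}
	if r.version != version {
		return errKeyModified
	}
	delete(s.data, key)
	return nil
}

func main() {
	s := &store{data: map[string]record{"endpoint/abc": {value: "sandbox", version: 1}}}

	// Reader snapshots version 1...
	r, _ := s.get("endpoint/abc")

	// ...then a concurrent writer bumps the record before the delete lands.
	s.mu.Lock()
	s.data["endpoint/abc"] = record{value: "sandbox", version: 2}
	s.mu.Unlock()

	// The first delete fails on the stale version; re-read and retry succeeds.
	for attempt := 1; ; attempt++ {
		if err := s.deleteAtVersion("endpoint/abc", r.version); err != nil {
			fmt.Printf("attempt %d: %v, retrying...\n", attempt, err)
			r, _ = s.get("endpoint/abc")
			continue
		}
		fmt.Printf("attempt %d: deleted endpoint/abc\n", attempt)
		break
	}
}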
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	3fd4cdf3a5fd7       6e38f40d628db       39 seconds ago       Running             storage-provisioner       1                   2c0de734fef36
	93f13d37d4785       4c03754524064       40 seconds ago       Running             kube-proxy                1                   8fa0dcd0b8612
	77a22cc95c9a9       25f8c7f3da61c       45 seconds ago       Running             etcd                      1                   84ceeb1af20d5
	962b034301273       595f327f224a4       45 seconds ago       Running             kube-scheduler            1                   4f032b43a8599
	531701230de4c       df7b72818ad2e       45 seconds ago       Running             kube-controller-manager   1                   08b8f59a5f4ba
	9564d7e881212       8fa62c12256df       45 seconds ago       Running             kube-apiserver            1                   77ad11efde503
	7c076965981f7       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   963a9454c0262
	0f95a6838cd9a       4c03754524064       About a minute ago   Exited              kube-proxy                0                   02136fcb6f2a8
	f103292226f66       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   b84f3422d4f34
	78ffb0ab7dc51       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   9b9f23fa412f9
	7685bdfe22590       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   c5d361a450c54
	c2c4289070e65       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   53615169312d5
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220531112729-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220531112729-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=newest-cni-20220531112729-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T11_27_52_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:27:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220531112729-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:29:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:29:13 +0000   Tue, 31 May 2022 18:27:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:29:13 +0000   Tue, 31 May 2022 18:27:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:29:13 +0000   Tue, 31 May 2022 18:27:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:29:13 +0000   Tue, 31 May 2022 18:29:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220531112729-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                270d6860-d295-4a39-8bcb-83c3e922fb10
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-m9wpk                                   100m (1%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (2%!)(MISSING)     70s
	  kube-system                 etcd-newest-cni-20220531112729-2169                       100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         83s
	  kube-system                 kube-apiserver-newest-cni-20220531112729-2169             250m (4%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         83s
	  kube-system                 kube-controller-manager-newest-cni-20220531112729-2169    200m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         83s
	  kube-system                 kube-proxy-rml7v                                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         71s
	  kube-system                 kube-scheduler-newest-cni-20220531112729-2169             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         84s
	  kube-system                 metrics-server-b955d9d8-4nh24                             100m (1%!)(MISSING)     0 (0%!)(MISSING)      200Mi (3%!)(MISSING)       0 (0%!)(MISSING)         68s
	  kube-system                 storage-provisioner                                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         69s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-r6z52                0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         1s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-8v2px                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%!)(MISSING)  0 (0%!)(MISSING)
	  memory             370Mi (6%!)(MISSING)  170Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 39s                kube-proxy  
	  Normal  Starting                 70s                kube-proxy  
	  Normal  NodeHasSufficientMemory  83s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 83s                kubelet     Starting kubelet.
	  Normal  NodeReady                73s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeReady
	  Normal  Starting                 46s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x7 over 46s)  kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x7 over 46s)  kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  46s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     46s (x7 over 46s)  kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientPID
	  Normal  Starting                 2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s                 kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet     Node newest-cni-20220531112729-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet     Node newest-cni-20220531112729-2169 status is now: NodeReady
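The "(1%!)(MISSING)" noise in the resource tables above is a Go fmt artifact, not damage to kubectl's output: whenever a format string contains a stray '%' verb with no matching argument, fmt substitutes its missing-argument marker "%!<verb>(MISSING)", so "100m (1%)" becomes "100m (1%!)(MISSING)" if the describe output is routed through a printf-style call somewhere in the log pipeline (that routing is inferred from the artifact, not stated by the log). A minimal reproduction:

// fmtmissing.go - reproduces the "(1%!)(MISSING)" artifact seen in the node
// tables above.
package main

import "fmt"

func main() {
	// A line of kubectl describe output that contains a literal '%'.
	describeLine := "cpu requests: 100m (1%)"

	// Wrong: the data is used as the format string, so fmt parses "%)" as a
	// verb, finds no argument for it, and emits its missing-argument marker.
	// (go vet flags this pattern for exactly this reason.)
	mangled := fmt.Sprintf(describeLine)
	fmt.Println(mangled) // cpu requests: 100m (1%!)(MISSING)

	// Right: pass the data as an argument instead of as the format string.
	fmt.Printf("%s\n", describeLine) // cpu requests: 100m (1%)
}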
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [77a22cc95c9a] <==
	* {"level":"info","ts":"2022-05-31T18:28:30.767Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-05-31T18:28:30.767Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-05-31T18:28:30.775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T18:28:30.775Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:28:30.775Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:28:30.775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:28:30.776Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:28:30.776Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:28:30.776Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:28:30.777Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:28:30.777Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-05-31T18:28:32.267Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220531112729-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:28:32.267Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:28:32.267Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:28:32.268Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:28:32.268Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:28:32.268Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:28:32.270Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [f103292226f6] <==
	* {"level":"info","ts":"2022-05-31T18:27:46.285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:27:46.286Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220531112729-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:27:46.286Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:27:46.286Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:27:46.286Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:27:46.287Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:27:46.287Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:28:07.726Z","caller":"traceutil/trace.go:171","msg":"trace[829804833] linearizableReadLoop","detail":"{readStateIndex:514; appliedIndex:514; }","duration":"187.328642ms","start":"2022-05-31T18:28:07.539Z","end":"2022-05-31T18:28:07.726Z","steps":["trace[829804833] 'read index received'  (duration: 187.322273ms)","trace[829804833] 'applied index is now lower than readState.Index'  (duration: 5.364µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T18:28:07.729Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"190.525408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-cssvj\" ","response":"range_response_count:1 size:4348"}
	{"level":"info","ts":"2022-05-31T18:28:07.729Z","caller":"traceutil/trace.go:171","msg":"trace[68716722] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-cssvj; range_end:; response_count:1; response_revision:500; }","duration":"190.716622ms","start":"2022-05-31T18:28:07.539Z","end":"2022-05-31T18:28:07.729Z","steps":["trace[68716722] 'agreement among raft nodes before linearized reading'  (duration: 187.461455ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T18:28:07.729Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"165.056009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2022-05-31T18:28:07.730Z","caller":"traceutil/trace.go:171","msg":"trace[954045274] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:500; }","duration":"165.325284ms","start":"2022-05-31T18:28:07.564Z","end":"2022-05-31T18:28:07.729Z","steps":["trace[954045274] 'agreement among raft nodes before linearized reading'  (duration: 162.105561ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T18:28:08.091Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-05-31T18:28:08.091Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220531112729-2169","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/05/31 18:28:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/05/31 18:28:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-05-31T18:28:08.098Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-05-31T18:28:08.099Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:28:08.100Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:28:08.100Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220531112729-2169","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:29:16 up  1:17,  0 users,  load average: 1.82, 1.12, 1.08
	Linux newest-cni-20220531112729-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9564d7e88121] <==
	* I0531 18:28:33.753074       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 18:28:33.754281       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 18:28:33.755371       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 18:28:33.755901       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 18:28:33.770198       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 18:28:33.776979       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:28:34.653297       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 18:28:34.653363       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 18:28:34.656797       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0531 18:28:34.780506       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:28:34.780653       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:28:34.780671       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:28:35.274633       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:28:35.281023       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:28:35.311609       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:28:35.352482       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:28:35.357506       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:28:35.929988       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:28:37.446106       1 controller.go:611] quota admission added evaluator for: namespaces
	I0531 18:28:37.570738       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.221.14]
	I0531 18:28:37.580320       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.110.157.141]
	I0531 18:29:12.633765       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:29:13.775999       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:29:14.074709       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
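"Rate Limited Requeue" above means the OpenAPI aggregation controller could not fetch the spec from v1beta1.metrics.k8s.io (metrics-server still answers 503 while it comes up after the restart) and put the item back on a rate-limited workqueue instead of retrying in a tight loop. The sketch below shows that requeue pattern with client-go's workqueue package; the syncItem stub and attempt counting are invented for illustration, and this is not the aggregator's actual loop.

// requeue.go - minimal rate-limited requeue loop, the workqueue pattern
// behind "Rate Limited Requeue" in the apiserver log above. Needs
// k8s.io/client-go in go.mod.
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// syncItem pretends the backend is unavailable for the first two attempts,
// mirroring metrics-server returning 503 while it starts up.
func syncItem(key string, attempt int) error {
	if attempt < 3 {
		return fmt.Errorf("http error: ResponseCode: 503, Body: service unavailable")
	}
	return nil
}

func main() {
	q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	q.Add("v1beta1.metrics.k8s.io")

	for attempt := 1; ; attempt++ {
		item, shutdown := q.Get()
		if shutdown {
			return
		}
		key := item.(string)
		if err := syncItem(key, attempt); err != nil {
			// Failed: re-add with backoff so retries slow down over time.
			fmt.Printf("action for item %s: Rate Limited Requeue (%v)\n", key, err)
			q.AddRateLimited(key)
			q.Done(item)
			continue
		}
		// Succeeded: reset the item's backoff history and stop.
		fmt.Printf("action for item %s: synced\n", key)
		q.Forget(key)
		q.Done(item)
		return
	}
}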
	
	* 
	* ==> kube-apiserver [c2c4289070e6] <==
	* W0531 18:28:09.096294       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096316       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096319       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096338       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096351       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096399       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096418       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096451       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096479       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096492       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096521       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096541       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096542       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096547       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096558       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096574       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096577       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096577       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096580       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096592       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096604       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096612       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096611       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096617       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0531 18:28:09.096659       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
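The wall of identical warnings above comes from the outgoing apiserver container (c2c4289070e6): its etcd clients keep redialing 127.0.0.1:2379 after etcd has already shut down for the restart, and the redial loop only stops when the apiserver process itself is killed. gRPC handles this reconnection internally; as a rough stdlib illustration of the client-side pattern, a dial-with-backoff loop might look like the hypothetical dialWithBackoff below.

// redial.go - minimal dial-with-backoff loop, sketching the client-side
// pattern behind the repeated "Reconnecting..." warnings above. Illustrative
// only; grpc-go manages reconnection internally via addrConn.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithBackoff retries a TCP dial with exponential backoff until it
// succeeds or the attempt budget is exhausted.
func dialWithBackoff(addr string, attempts int) (net.Conn, error) {
	backoff := 100 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial %s failed: %v. Reconnecting in %v...\n", addr, err, backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2 // exponential backoff, capped
		}
	}
	return nil, fmt.Errorf("giving up on %s after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	// 127.0.0.1:2379 is etcd's client port; with nothing listening this
	// prints a few "connection refused" retries and then gives up.
	if _, err := dialWithBackoff("127.0.0.1:2379", 3); err != nil {
		fmt.Println(err)
	}
}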
	
	* 
	* ==> kube-controller-manager [531701230de4] <==
	* I0531 18:29:13.728386       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0531 18:29:13.742376       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0531 18:29:13.749471       1 shared_informer.go:247] Caches are synced for taint 
	I0531 18:29:13.749557       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0531 18:29:13.749574       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 18:29:13.749621       1 event.go:294] "Event occurred" object="newest-cni-20220531112729-2169" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220531112729-2169 event: Registered Node newest-cni-20220531112729-2169 in Controller"
	W0531 18:29:13.749604       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220531112729-2169. Assuming now as a timestamp.
	I0531 18:29:13.749660       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0531 18:29:13.749772       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 18:29:13.771177       1 shared_informer.go:247] Caches are synced for TTL 
	I0531 18:29:13.777515       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0531 18:29:13.778576       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0531 18:29:13.822879       1 shared_informer.go:247] Caches are synced for attach detach 
	I0531 18:29:13.825297       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:29:13.832435       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:29:13.834952       1 shared_informer.go:247] Caches are synced for stateful set 
	I0531 18:29:13.845936       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 18:29:13.847074       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 18:29:13.857359       1 shared_informer.go:247] Caches are synced for expand 
	I0531 18:29:13.873526       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0531 18:29:14.129146       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-8v2px"
	I0531 18:29:14.133358       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-r6z52"
	I0531 18:29:14.240593       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:29:14.240628       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 18:29:14.243130       1 shared_informer.go:247] Caches are synced for garbage collector 
	
	* 
	* ==> kube-controller-manager [7685bdfe2259] <==
	* I0531 18:28:04.273542       1 shared_informer.go:247] Caches are synced for PV protection 
	I0531 18:28:04.285172       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:28:04.298828       1 shared_informer.go:247] Caches are synced for taint 
	I0531 18:28:04.299004       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0531 18:28:04.299054       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220531112729-2169. Assuming now as a timestamp.
	I0531 18:28:04.299092       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0531 18:28:04.299292       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 18:28:04.299382       1 event.go:294] "Event occurred" object="newest-cni-20220531112729-2169" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220531112729-2169 event: Registered Node newest-cni-20220531112729-2169 in Controller"
	I0531 18:28:04.321145       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 18:28:04.327204       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:28:04.370708       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 18:28:04.370776       1 disruption.go:371] Sending events to api server.
	I0531 18:28:04.744877       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:28:04.794284       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:28:04.794345       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 18:28:04.878999       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rml7v"
	I0531 18:28:05.027228       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 18:28:05.113311       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 18:28:05.125701       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-cssvj"
	I0531 18:28:05.129589       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-m9wpk"
	I0531 18:28:05.142073       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-cssvj"
	I0531 18:28:07.285742       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0531 18:28:07.289244       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0531 18:28:07.293150       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0531 18:28:07.298593       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-4nh24"
	
	* 
	* ==> kube-proxy [0f95a6838cd9] <==
	* I0531 18:28:05.480309       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:28:05.480375       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:28:05.480416       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:28:05.503214       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:28:05.503272       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:28:05.503280       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:28:05.503295       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:28:05.503750       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:28:05.504660       1 config.go:317] "Starting service config controller"
	I0531 18:28:05.504735       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:28:05.504770       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:28:05.504776       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:28:05.605592       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:28:05.605621       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [93f13d37d478] <==
	* I0531 18:28:35.669525       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:28:35.669579       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:28:35.669600       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:28:35.917077       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:28:35.917238       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:28:35.917486       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:28:35.917544       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:28:35.921268       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:28:35.922561       1 config.go:317] "Starting service config controller"
	I0531 18:28:35.922606       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:28:35.922628       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:28:35.922631       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:28:36.038270       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:28:36.038340       1 shared_informer.go:247] Caches are synced for service config 
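Both kube-proxy generations above log the same startup handshake: "Waiting for caches to sync" and then "Caches are synced" once the informers' initial LISTs have completed. The sketch below shows that wait-for-sync pattern with client-go (it needs k8s.io/client-go in go.mod and uses a fake clientset so it runs without a cluster); it illustrates the library pattern the log reflects, not kube-proxy's own wiring.

// cachesync.go - minimal informer start/sync handshake, the client-go
// pattern behind kube-proxy's "Waiting for caches to sync ... Caches are
// synced" lines above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func main() {
	client := fake.NewSimpleClientset()
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)

	// Registering the informer before Start mirrors "Starting service
	// config controller" in the log.
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)

	factory.Start(stop)
	fmt.Println("Waiting for caches to sync for service config")
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		fmt.Println("failed to sync caches")
		return
	}
	fmt.Println("Caches are synced for service config")
}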
	
	* 
	* ==> kube-scheduler [78ffb0ab7dc5] <==
	* W0531 18:27:49.193282       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:27:49.194233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:27:49.193268       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:27:49.194245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:27:49.194070       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:27:49.194231       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:27:49.194252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:27:49.194327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:27:49.195317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:27:49.195359       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:27:50.076759       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:27:50.076815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:27:50.148554       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:27:50.148608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:27:50.233409       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:27:50.233425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:27:50.265479       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:27:50.265521       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:27:50.307502       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:27:50.307545       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0531 18:27:50.690212       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0531 18:27:51.479213       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0531 18:28:08.092864       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0531 18:28:08.092960       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0531 18:28:08.094754       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [962b03430127] <==
	* W0531 18:28:30.747455       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0531 18:28:31.549409       1 serving.go:348] Generated self-signed cert in-memory
	W0531 18:28:33.692901       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 18:28:33.692995       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:28:33.693015       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 18:28:33.693027       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 18:28:33.746557       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0531 18:28:33.747771       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0531 18:28:33.747859       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 18:28:33.747887       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 18:28:33.748369       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 18:28:33.848573       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:28:21 UTC, end at Tue 2022-05-31 18:29:18 UTC. --
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:17.271792    3689 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9\" network for pod \"dashboard-metrics-scraper-56974995fc-r6z52\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9\" network for pod \"dashboard-metrics-scraper-56974995fc-r6z52\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-0ae2a24cfc10e7be3ea03448 -m comment --comment name: \"crio\" id: \"42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0ae2a24cfc10e7be3ea03448':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:17.271846    3689 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard(5d039dd7-c288-4ae2-aaca-313ecf1c364f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard(5d039dd7-c288-4ae2-aaca-313ecf1c364f)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-r6z52\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-r6z52\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-0ae2a24cfc10e7be3ea03448 -m comment --comment name: \\\"crio\\\" id: \\\"42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0ae2a24cfc10e7be3ea03448':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52" podUID=5d039dd7-c288-4ae2-aaca-313ecf1c364f
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:17.273749    3689 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f1024d52026a289d8a3bfd3f -m comment --comment name: \"crio\" id: \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f1024d52026a289d8a3bfd3f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8v2px" podSandboxID={Type:docker ID:cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308} podNetnsPath="/proc/4720/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:17.533822    3689 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" network for pod \"kubernetes-dashboard-8469778f77-8v2px\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" network for pod \"kubernetes-dashboard-8469778f77-8v2px\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f1024d52026a289d8a3bfd3f -m comment --comment name: \"crio\" id: \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f1024d52026a289d8a3bfd3f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:17.533886    3689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" network for pod \"kubernetes-dashboard-8469778f77-8v2px\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" network for pod \"kubernetes-dashboard-8469778f77-8v2px\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f1024d52026a289d8a3bfd3f -m comment --comment name: \"crio\" id: \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f1024d52026a289d8a3bfd3f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8v2px"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:17.533909    3689 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" network for pod \"kubernetes-dashboard-8469778f77-8v2px\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" network for pod \"kubernetes-dashboard-8469778f77-8v2px\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f1024d52026a289d8a3bfd3f -m comment --comment name: \"crio\" id: \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f1024d52026a289d8a3bfd3f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8v2px"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:17.533963    3689 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard(cee8ef43-4fd0-437f-bf62-60fb92d0aa01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard(cee8ef43-4fd0-437f-bf62-60fb92d0aa01)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\\\" network for pod \\\"kubernetes-dashboard-8469778f77-8v2px\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\\\" network for pod \\\"kubernetes-dashboard-8469778f77-8v2px\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f1024d52026a289d8a3bfd3f -m comment --comment name: \\\"crio\\\" id: \\\"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f1024d52026a289d8a3bfd3f':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8v2px" podUID=cee8ef43-4fd0-437f-bf62-60fb92d0aa01
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.666138    3689 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9\""
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.666803    3689 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.668434    3689 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9\""
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.669667    3689 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-b955d9d8-4nh24_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"acfd20649c72c2c188ef6dd8c75040a64d1dd89976699032b0db83261e520a1f\""
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.671282    3689 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="acfd20649c72c2c188ef6dd8c75040a64d1dd89976699032b0db83261e520a1f"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.673545    3689 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"acfd20649c72c2c188ef6dd8c75040a64d1dd89976699032b0db83261e520a1f\""
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.673670    3689 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\""
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.676287    3689 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3c06ca5cd9628b379e317f8b85557b21f22e6dba99c59d0ddf80c21054643ed2"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.676323    3689 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308"
	May 31 18:29:17 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:17.678181    3689 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308\""
	May 31 18:29:18 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:18.337135    3689 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52" podSandboxID={Type:docker ID:9532d06b528ca7839562ebfea5795d5ae9d8b980a8e54db4ccf0314cae5582b3} podNetnsPath="/proc/5110/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:18 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:18.337135    3689 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/metrics-server-b955d9d8-4nh24" podSandboxID={Type:docker ID:09074ce331644d1a11bbae43393da1a4627df33fc36576a5ade7bfa17607e3ff} podNetnsPath="/proc/5117/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:18 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:18.345920    3689 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8v2px" podSandboxID={Type:docker ID:f793bd71970e37102fabd6603869afdab5b6f6e4fd70f81a28a9a740beb33411} podNetnsPath="/proc/5124/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:18 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:18.401371    3689 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-b084046c74c9cea7f8f2d6c3 -m comment --comment name: \"crio\" id: \"9532d06b528ca7839562ebfea5795d5ae9d8b980a8e54db4ccf0314cae5582b3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b084046c74c9cea7f8f2d6c3':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52" podSandboxID={Type:docker ID:9532d06b528ca7839562ebfea5795d5ae9d8b980a8e54db4ccf0314cae5582b3} podNetnsPath="/proc/5110/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:18 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:18.407508    3689 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-e8686254c7186e4708a1cfae -m comment --comment name: \"crio\" id: \"09074ce331644d1a11bbae43393da1a4627df33fc36576a5ade7bfa17607e3ff\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e8686254c7186e4708a1cfae':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/metrics-server-b955d9d8-4nh24" podSandboxID={Type:docker ID:09074ce331644d1a11bbae43393da1a4627df33fc36576a5ade7bfa17607e3ff} podNetnsPath="/proc/5117/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:18 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:18.413536    3689 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-4ce118b5b658b6748e60f3ac -m comment --comment name: \"crio\" id: \"f793bd71970e37102fabd6603869afdab5b6f6e4fd70f81a28a9a740beb33411\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4ce118b5b658b6748e60f3ac':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-8v2px" podSandboxID={Type:docker ID:f793bd71970e37102fabd6603869afdab5b6f6e4fd70f81a28a9a740beb33411} podNetnsPath="/proc/5124/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:18 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:18.416574    3689 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-m9wpk" podSandboxID={Type:docker ID:d9a6428e8c63b75ea5344abfb03c0be4114fe7a6d8240c7bb7fdb6e3611fe4d5} podNetnsPath="/proc/5220/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:18 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:18.453501    3689 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-bf9349759ee5c024b13d5bfa -m comment --comment name: \"crio\" id: \"d9a6428e8c63b75ea5344abfb03c0be4114fe7a6d8240c7bb7fdb6e3611fe4d5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bf9349759ee5c024b13d5bfa':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-m9wpk" podSandboxID={Type:docker ID:d9a6428e8c63b75ea5344abfb03c0be4114fe7a6d8240c7bb7fdb6e3611fe4d5} podNetnsPath="/proc/5220/ns/net" networkType="bridge" networkName="crio"
	
	* 
	* ==> storage-provisioner [3fd4cdf3a5fd] <==
	* I0531 18:28:36.549023       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:28:36.562743       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:28:36.562793       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:29:12.635274       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:29:12.635459       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220531112729-2169_fa71ae38-7da7-47d1-84de-b1f1248566b6!
	I0531 18:29:12.636610       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"634329ce-a558-49c1-b9d8-0b4e8eaaae7c", APIVersion:"v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220531112729-2169_fa71ae38-7da7-47d1-84de-b1f1248566b6 became leader
	I0531 18:29:12.736575       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220531112729-2169_fa71ae38-7da7-47d1-84de-b1f1248566b6!
	
	* 
	* ==> storage-provisioner [7c076965981f] <==
	* I0531 18:28:07.032251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:28:07.040601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:28:07.040637       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:28:07.048317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:28:07.048451       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220531112729-2169_ad03480f-a450-496c-a547-ef901dc75c1c!
	I0531 18:28:07.048751       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"634329ce-a558-49c1-b9d8-0b4e8eaaae7c", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220531112729-2169_ad03480f-a450-496c-a547-ef901dc75c1c became leader
	I0531 18:28:07.148609       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220531112729-2169_ad03480f-a450-496c-a547-ef901dc75c1c!
	

                                                
                                                
-- /stdout --
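Editor's note: the kubelet section above repeats one failure signature for every pod: sandbox creation fails with "failed to set bridge addr: could not add IP address to \"cni0\": permission denied", and the follow-up teardown fails because the matching CNI-* iptables chain has already been deleted. When triaging a log like this it helps to reduce it to a per-pod tally. The following is a minimal Go sketch for that, not part of the minikube test suite; the kubelet.log path and the pod="..." extraction regex are assumptions for illustration.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Hypothetical path: a captured copy of the kubelet journal shown above.
	f, err := os.Open("kubelet.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// kubelet structured logs carry the affected pod as pod="namespace/name".
	podRe := regexp.MustCompile(`pod="([^"]+)"`)
	bridgeErrs := map[string]int{} // pod -> count of cni0 bridge failures
	staleChains := 0               // teardowns that hit an already-deleted CNI chain

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these lines exceed the 64K default
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "could not add IP address to") {
			if m := podRe.FindStringSubmatch(line); m != nil {
				bridgeErrs[m[1]]++
			}
		}
		if strings.Contains(line, "Couldn't load target `CNI-") {
			staleChains++
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for pod, n := range bridgeErrs {
		fmt.Printf("%s: %d cni0 bridge failures\n", pod, n)
	}
	fmt.Printf("%d teardown errors against already-deleted CNI chains\n", staleChains)
}

For the log above this would attribute bridge failures to both dashboard pods and the metrics-server pod, which points at a single shared cause (the cni0 bridge) rather than per-pod problems.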
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220531112729-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220531112729-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.113463031s)
helpers_test.go:270: non-running pods: coredns-64897985d-m9wpk metrics-server-b955d9d8-4nh24 dashboard-metrics-scraper-56974995fc-r6z52 kubernetes-dashboard-8469778f77-8v2px
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220531112729-2169 describe pod coredns-64897985d-m9wpk metrics-server-b955d9d8-4nh24 dashboard-metrics-scraper-56974995fc-r6z52 kubernetes-dashboard-8469778f77-8v2px
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220531112729-2169 describe pod coredns-64897985d-m9wpk metrics-server-b955d9d8-4nh24 dashboard-metrics-scraper-56974995fc-r6z52 kubernetes-dashboard-8469778f77-8v2px: exit status 1 (259.55075ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-m9wpk" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-4nh24" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-r6z52" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-8v2px" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220531112729-2169 describe pod coredns-64897985d-m9wpk metrics-server-b955d9d8-4nh24 dashboard-metrics-scraper-56974995fc-r6z52 kubernetes-dashboard-8469778f77-8v2px: exit status 1
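Editor's note: the non-zero exit here is a race in the post-mortem, not a separate failure. The field-selector query at helpers_test.go:261 listed four non-running pods, but by the time helpers_test.go:275 ran `kubectl describe pod`, all four had been deleted, so the server answered NotFound for each. A hedged Go sketch of that pattern follows; it is not the actual helpers_test.go code, and the context and pod names are placeholders taken from the output above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describePods runs `kubectl describe pod` and treats a non-zero exit whose
// output is only NotFound errors as "pods already deleted", not a hard failure.
func describePods(kubeContext string, pods []string) error {
	args := append([]string{"--context", kubeContext, "describe", "pod"}, pods...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err == nil {
		fmt.Print(string(out))
		return nil
	}
	trimmed := strings.TrimSpace(string(out))
	allNotFound := trimmed != ""
	for _, line := range strings.Split(trimmed, "\n") {
		if !strings.Contains(line, "NotFound") {
			allNotFound = false
			break
		}
	}
	if allNotFound {
		// Pods vanished between the field-selector listing and this call.
		fmt.Println("pods already deleted; nothing to describe")
		return nil
	}
	return fmt.Errorf("kubectl describe failed: %v\n%s", err, out)
}

func main() {
	pods := []string{"coredns-64897985d-m9wpk", "kubernetes-dashboard-8469778f77-8v2px"}
	if err := describePods("newest-cni-20220531112729-2169", pods); err != nil {
		fmt.Println(err)
	}
}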
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220531112729-2169
helpers_test.go:235: (dbg) docker inspect newest-cni-20220531112729-2169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1",
	        "Created": "2022-05-31T18:27:36.034608276Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274950,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:28:21.80880565Z",
	            "FinishedAt": "2022-05-31T18:28:19.877911414Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1/hosts",
	        "LogPath": "/var/lib/docker/containers/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1/4d4fcac3a251b8ef7a33b98a9d619d216989ca66802e3a7f769c3d84cc8290c1-json.log",
	        "Name": "/newest-cni-20220531112729-2169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220531112729-2169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220531112729-2169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b53cfcddda9b6eb61be4cbe72d1aa85943035159636a3c4b3ebc31701f1e3a31-init/diff:/var/lib/docker/overlay2/e9236ac2c8288edd356d491038859bb7b041caf2af4db686b5a369984cd72c7a/diff:/var/lib/docker/overlay2/9583a36102d597beec28eedc4595cd1894a38b7f2f977136a1db0d9279787834/diff:/var/lib/docker/overlay2/1910392279f776d7119f28b1da556d13ca658b69e1e41b3507098eb04986989a/diff:/var/lib/docker/overlay2/726cfb6c8268ab6b419a127af98543ed0e007dd73abc7657074c803ea3e72111/diff:/var/lib/docker/overlay2/3252ec1ac98e8c7adcbfc1f8dd2e5066559bb420e7590a9a7140a8e4f2616df4/diff:/var/lib/docker/overlay2/7ffeda290a1ddaf5f37f463abd306472ef3996f100e9d877ea37dfeb7bac3305/diff:/var/lib/docker/overlay2/67deff9d213ce915a44ed74b4258db8724cae9fb1c4350a3e0b0992c78f1e1cb/diff:/var/lib/docker/overlay2/7fae3f5b1e40a6b2186a50fe2f1118e775a7b5398f336a975aead4c9e584558f/diff:/var/lib/docker/overlay2/c5eb8740d6ddbf9661674f0780b3e350044c1b324021ddb90c6855692c4ab640/diff:/var/lib/docker/overlay2/276ff0
e489fcbdc52a5f1fcc3be2f2326e94a4982e453b724a337d50bf52eb3d/diff:/var/lib/docker/overlay2/d6aef1152a663bc7d5f067f5fe4b24044b4a02c5347c43caff2575cd589113b8/diff:/var/lib/docker/overlay2/a2452c33f209b77b71bfc37303741ebf457fd51f564cb5a037a8424a8ce25c07/diff:/var/lib/docker/overlay2/9e744ad586c9ac75bdce5f524a563cfcc44d64ecf745e00faa6d3b68ce2d6f26/diff:/var/lib/docker/overlay2/63c5ca51c4edca5a8b3540c1d94140a0405a1f93884c2e724b2eea0cc1a13427/diff:/var/lib/docker/overlay2/f058e08bfebb5a2e0de90a4c0776530ff390a31d67cc53b1a1a46cf564565612/diff:/var/lib/docker/overlay2/13949d47879ca0c2aa7b22b4a5d0724e077d3c986e8c855a8ece54b0ecb19ab6/diff:/var/lib/docker/overlay2/6f6b92a9cd1ae7141ef5e0f104fe78cb2306fd6a4ec3774a2814bc01b8a6eb63/diff:/var/lib/docker/overlay2/9637cafb8fb45b01bcc69e7fbe495dfedda118f48b3a7f59fe8c98fd940d2d00/diff:/var/lib/docker/overlay2/4b29feee1b31be18c7ef5fa25676f134aa0e4b773d02d5da7ee0475a7140e610/diff:/var/lib/docker/overlay2/05559122fed6de027ba73a4e1942bf9f548a08d3b34d0af8f0b86dcd6c42bea9/diff:/var/lib/d
ocker/overlay2/ce49d161daafc6ba4ecd8cdb6397ec63bffb59f2e18df9b749a66e5969c68070/diff:/var/lib/docker/overlay2/92528940c1b38142a7c444bab261669899de190d91344740f163c3fc8b034c94/diff:/var/lib/docker/overlay2/213d9aaca46bb96714b6c2b6834c9856068bf3b2808de7d72a5ef16c510cb88b/diff:/var/lib/docker/overlay2/f3b807d5e369ba4dc82a418bbb1a6a94a2d03648271201ec072a35789d6dce6c/diff:/var/lib/docker/overlay2/cdd5e2d059a4c7ea35a272fe0be016e91ababa31fea7fd92a5e439d6a71c8f5a/diff:/var/lib/docker/overlay2/81427a1d82997492e887fa4c5dc5efe18afea2e781415c09871343929a7e2bb4/diff:/var/lib/docker/overlay2/e69dff205635b2608858fa350c39ac2a4ade43e49c2d763d0ae4ac40e988fe64/diff:/var/lib/docker/overlay2/d94ce2070bfa0f1a4fc27f1c3f85432edb9d3cb6f29ebcd17b55568360312bf1/diff:/var/lib/docker/overlay2/34f42e7d85bb93a9320d94111a1381db9349da2a97103e2ad6d3ffaa4fda7dde/diff:/var/lib/docker/overlay2/422546d5dec87eb31c4bb28995c99ab55cfd8c2bc37d163f4d3b0671d7a839a3/diff:/var/lib/docker/overlay2/cd140b76bb4d6a5542198336ac5b3a2f3bfaeb635d87254dc77fb1ef350
93e76/diff:/var/lib/docker/overlay2/86226c3ca95e11447436ab52b705105aeaf86ee1d00ad33407928341087359fa/diff:/var/lib/docker/overlay2/6ef3509340435f9a8324c324f8b2044881dd1ebc80e42c4f825fe5dcd0731959/diff:/var/lib/docker/overlay2/aec95268413baa5f7ed43f0952ea22bf552b63f0512dad2ea36bd12c05bf3897/diff:/var/lib/docker/overlay2/3610e476078737e1a7323c8c7779b228cc3068d82c7dcc0b53728e3a228762a4/diff:/var/lib/docker/overlay2/2416c31436c0f817eb20213db328f2fffce16f7b5983ae8d9c520aea87ea6bfa/diff:/var/lib/docker/overlay2/a6e9860639b3c4314d23c278b859d82b4ed39abde781c1bed7b1062951c27ad7/diff:/var/lib/docker/overlay2/6556118214bbe4f83fbc99a3461679b72d9d7e467e01aca0a1c3a37ab8bbe801/diff:/var/lib/docker/overlay2/e46b69b82a00dd790bf30de0a046bfdd1e78e1dde33976d576232177a7eeee3e/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b53cfcddda9b6eb61be4cbe72d1aa85943035159636a3c4b3ebc31701f1e3a31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b53cfcddda9b6eb61be4cbe72d1aa85943035159636a3c4b3ebc31701f1e3a31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b53cfcddda9b6eb61be4cbe72d1aa85943035159636a3c4b3ebc31701f1e3a31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220531112729-2169",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220531112729-2169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220531112729-2169",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220531112729-2169",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220531112729-2169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dccacc59ed3bc8a780c7bd816f97ebe7d4b39df641369f3464d12f665ed8586e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55181"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dccacc59ed3b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220531112729-2169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4d4fcac3a251",
	                        "newest-cni-20220531112729-2169"
	                    ],
	                    "NetworkID": "147c62ffd7f8eb5bf4dc44f7cfdec6e219304c41f5644968d9079ed6e2aefb26",
	                    "EndpointID": "3627c087c717fc20bcf6601407ec30b2bee4abbc24d0b2103429ab43e9d3e21d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
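Editor's note: the NetworkSettings.Ports map in the inspect output above is where the host-side endpoints live; 8443/tcp, the Kubernetes API server port, is published on 127.0.0.1:55181. A small Go sketch (independent of minikube's own code; the program name is a placeholder) that decodes `docker inspect` JSON from stdin and recovers that binding:

// Usage: docker inspect newest-cni-20220531112729-2169 | go run extract.go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspect models only the fields needed here; docker inspect emits an array.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var containers []inspect
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range containers {
		bindings := c.NetworkSettings.Ports["8443/tcp"]
		if len(bindings) == 0 {
			fmt.Println("8443/tcp is not published")
			continue
		}
		// For the output above this prints: apiserver reachable at 127.0.0.1:55181
		fmt.Printf("apiserver reachable at %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}
}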
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220531112729-2169 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220531112729-2169 logs -n 25: (5.22052683s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                         | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                            |                                                |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220531111208-2169                | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | embed-certs-20220531111208-2169                            |                                                |         |                |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220531111946-2169      | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:19 PDT |
	|         | disable-driver-mounts-20220531111946-2169                  |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:19 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:20 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220531110241-2169                        | old-k8s-version-20220531110241-2169            | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:20 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:26 PDT | 31 May 22 11:26 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531111947-2169             | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531111947-2169             | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531111947-2169 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:27 PDT |
	|         | default-k8s-different-port-20220531111947-2169             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531112729-2169 --memory=2200            | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:27 PDT | 31 May 22 11:28 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531112729-2169 --memory=2200            | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:28 PDT | 31 May 22 11:28 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:29 PDT | 31 May 22 11:29 PDT |
	|         | newest-cni-20220531112729-2169                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531112729-2169                             | newest-cni-20220531112729-2169                 | jenkins | v1.26.0-beta.1 | 31 May 22 11:29 PDT | 31 May 22 11:29 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 11:28:20
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 11:28:20.578170   14601 out.go:296] Setting OutFile to fd 1 ...
	I0531 11:28:20.578344   14601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:28:20.578349   14601 out.go:309] Setting ErrFile to fd 2...
	I0531 11:28:20.578353   14601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 11:28:20.578450   14601 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 11:28:20.578728   14601 out.go:303] Setting JSON to false
	I0531 11:28:20.593905   14601 start.go:115] hostinfo: {"hostname":"37309.local","uptime":5269,"bootTime":1654016431,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 11:28:20.594000   14601 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 11:28:20.616053   14601 out.go:177] * [newest-cni-20220531112729-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 11:28:20.657488   14601 notify.go:193] Checking for updates...
	I0531 11:28:20.678853   14601 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 11:28:20.700919   14601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:20.721904   14601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 11:28:20.744090   14601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 11:28:20.766040   14601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 11:28:20.788411   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:20.789066   14601 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 11:28:20.860555   14601 docker.go:137] docker version: linux-20.10.14
	I0531 11:28:20.860683   14601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:28:20.986144   14601 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:28:20.919865429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:28:21.008666   14601 out.go:177] * Using the docker driver based on existing profile
	I0531 11:28:21.030286   14601 start.go:284] selected driver: docker
	I0531 11:28:21.030310   14601 start.go:806] validating driver "docker" against &{Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[a
piserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:21.030459   14601 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 11:28:21.033857   14601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 11:28:21.157996   14601 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 18:28:21.093562365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 11:28:21.158200   14601 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0531 11:28:21.158219   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:21.158227   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:21.158238   14601 start_flags.go:306] config:
	{Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_
ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:21.201959   14601 out.go:177] * Starting control plane node newest-cni-20220531112729-2169 in cluster newest-cni-20220531112729-2169
	I0531 11:28:21.224000   14601 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 11:28:21.245702   14601 out.go:177] * Pulling base image ...
	I0531 11:28:21.287911   14601 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:28:21.287945   14601 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 11:28:21.288004   14601 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 11:28:21.288030   14601 cache.go:57] Caching tarball of preloaded images
	I0531 11:28:21.288213   14601 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 11:28:21.288233   14601 cache.go:60] Finished verifying existence of preloaded tar for v1.23.6 on docker
	I0531 11:28:21.289361   14601 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/config.json ...
	I0531 11:28:21.352247   14601 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 11:28:21.352265   14601 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 11:28:21.352276   14601 cache.go:206] Successfully downloaded all kic artifacts
	I0531 11:28:21.352353   14601 start.go:352] acquiring machines lock for newest-cni-20220531112729-2169: {Name:mk223b02c8d18fd8125fc1aec4677c6b6e6ebb27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 11:28:21.352426   14601 start.go:356] acquired machines lock for "newest-cni-20220531112729-2169" in 55.579µs
	I0531 11:28:21.352446   14601 start.go:94] Skipping create...Using existing machine configuration
	I0531 11:28:21.352452   14601 fix.go:55] fixHost starting: 
	I0531 11:28:21.352679   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:21.419750   14601 fix.go:103] recreateIfNeeded on newest-cni-20220531112729-2169: state=Stopped err=<nil>
	W0531 11:28:21.419776   14601 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 11:28:21.441735   14601 out.go:177] * Restarting existing docker container for "newest-cni-20220531112729-2169" ...
	I0531 11:28:21.463797   14601 cli_runner.go:164] Run: docker start newest-cni-20220531112729-2169
	I0531 11:28:21.814105   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:21.885658   14601 kic.go:416] container "newest-cni-20220531112729-2169" state is running.
	I0531 11:28:21.886206   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:21.959677   14601 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/config.json ...
	I0531 11:28:21.960076   14601 machine.go:88] provisioning docker machine ...
	I0531 11:28:21.960098   14601 ubuntu.go:169] provisioning hostname "newest-cni-20220531112729-2169"
	I0531 11:28:21.960175   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.032376   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.032565   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.032580   14601 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531112729-2169 && echo "newest-cni-20220531112729-2169" | sudo tee /etc/hostname
	I0531 11:28:22.157035   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531112729-2169
	
	I0531 11:28:22.157115   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.228089   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.228232   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.228247   14601 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531112729-2169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531112729-2169/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531112729-2169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 11:28:22.339894   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 11:28:22.339918   14601 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 11:28:22.339945   14601 ubuntu.go:177] setting up certificates
	I0531 11:28:22.339961   14601 provision.go:83] configureAuth start
	I0531 11:28:22.340025   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:22.411484   14601 provision.go:138] copyHostCerts
	I0531 11:28:22.411577   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 11:28:22.411587   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 11:28:22.411674   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 11:28:22.411878   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 11:28:22.411888   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 11:28:22.411944   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 11:28:22.412077   14601 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 11:28:22.412083   14601 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 11:28:22.412138   14601 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1675 bytes)
	I0531 11:28:22.412247   14601 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531112729-2169 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531112729-2169]
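minikube generates this SAN-bearing server certificate natively in Go; purely for reference, a roughly equivalent self-signed certificate could be produced with OpenSSL (1.1.1+ for -addext). This is a hypothetical sketch, not the command minikube runs:

	# Hypothetical OpenSSL equivalent of the server cert generated above
	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.newest-cni-20220531112729-2169" \
	  -addext "subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:newest-cni-20220531112729-2169"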
	I0531 11:28:22.494505   14601 provision.go:172] copyRemoteCerts
	I0531 11:28:22.494581   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 11:28:22.494633   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.566548   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:22.647934   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 11:28:22.667800   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 11:28:22.686536   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 11:28:22.706691   14601 provision.go:86] duration metric: configureAuth took 366.717286ms
	I0531 11:28:22.706707   14601 ubuntu.go:193] setting minikube options for container-runtime
	I0531 11:28:22.706872   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:22.706929   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.778451   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.778594   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.778608   14601 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 11:28:22.890950   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 11:28:22.890970   14601 ubuntu.go:71] root file system type: overlay
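The df one-liner above reports the root filesystem type inside the container (overlay here). Equivalent probes, assuming GNU coreutils:

	df --output=fstype / | tail -n 1   # prints: overlay
	stat -f -c %T /                    # prints: overlayfs (GNU stat, filesystem mode)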
	I0531 11:28:22.891144   14601 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 11:28:22.891228   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:22.963222   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:22.963394   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:22.963443   14601 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 11:28:23.086363   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 11:28:23.086459   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.156610   14601 main.go:134] libmachine: Using SSH client type: native
	I0531 11:28:23.156758   14601 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55182 <nil> <nil>}
	I0531 11:28:23.156784   14601 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 11:28:23.275544   14601 main.go:134] libmachine: SSH cmd err, output: <nil>: 
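The command above is a write-then-swap idiom: the fresh unit was written to docker.service.new, and only if it differs from the live unit is it moved into place and the daemon reloaded and restarted, which keeps re-provisioning idempotent. The same pattern in isolation, with the paths used above:

	UNIT=/lib/systemd/system/docker.service
	sudo diff -u "$UNIT" "$UNIT.new" || {
	  sudo mv "$UNIT.new" "$UNIT"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}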
	I0531 11:28:23.275558   14601 machine.go:91] provisioned docker machine in 1.315489714s
	I0531 11:28:23.275564   14601 start.go:306] post-start starting for "newest-cni-20220531112729-2169" (driver="docker")
	I0531 11:28:23.275568   14601 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 11:28:23.275635   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 11:28:23.275687   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.345259   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.429063   14601 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 11:28:23.432446   14601 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 11:28:23.432461   14601 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 11:28:23.432468   14601 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 11:28:23.432475   14601 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 11:28:23.432482   14601 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 11:28:23.432621   14601 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 11:28:23.432759   14601 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem -> 21692.pem in /etc/ssl/certs
	I0531 11:28:23.432905   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 11:28:23.439726   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:28:23.456662   14601 start.go:309] post-start completed in 181.091671ms
	I0531 11:28:23.456739   14601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 11:28:23.456786   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.526742   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.605692   14601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 11:28:23.610546   14601 fix.go:57] fixHost completed within 2.258116486s
	I0531 11:28:23.610566   14601 start.go:81] releasing machines lock for "newest-cni-20220531112729-2169", held for 2.258159111s
	I0531 11:28:23.610672   14601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531112729-2169
	I0531 11:28:23.680718   14601 ssh_runner.go:195] Run: systemctl --version
	I0531 11:28:23.680719   14601 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 11:28:23.680772   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.680795   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:23.754229   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.757054   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:23.836240   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 11:28:23.968614   14601 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:28:23.978398   14601 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 11:28:23.978455   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 11:28:23.987743   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 11:28:24.000522   14601 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 11:28:24.067960   14601 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 11:28:24.135864   14601 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 11:28:24.145523   14601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 11:28:24.212934   14601 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 11:28:24.222595   14601 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:28:24.257967   14601 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 11:28:24.335762   14601 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 11:28:24.335888   14601 cli_runner.go:164] Run: docker exec -t newest-cni-20220531112729-2169 dig +short host.docker.internal
	I0531 11:28:24.460335   14601 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 11:28:24.460445   14601 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 11:28:24.464822   14601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
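The hosts-file update above is idempotent: strip any existing host.minikube.internal line, append the fresh mapping, and copy the result back over /etc/hosts in one step. The same idiom as a reusable sketch (pin_host is a hypothetical helper name, not part of minikube):

	# Hypothetical helper wrapping the grep/echo/cp idiom used above
	pin_host() {  # usage: pin_host <ip> <name>
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	pin_host 192.168.65.2 host.minikube.internal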
	I0531 11:28:24.475293   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:24.566562   14601 out.go:177]   - kubelet.network-plugin=cni
	I0531 11:28:24.588831   14601 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 11:28:24.610772   14601 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 11:28:24.610916   14601 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:28:24.642456   14601 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 11:28:24.642471   14601 docker.go:541] Images already preloaded, skipping extraction
	I0531 11:28:24.642549   14601 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 11:28:24.671595   14601 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 11:28:24.671615   14601 cache_images.go:84] Images are preloaded, skipping loading
	I0531 11:28:24.671707   14601 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 11:28:24.745107   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:24.745118   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:24.745131   14601 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0531 11:28:24.745142   14601 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220531112729-2169 NodeName:newest-cni-20220531112729-2169 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false]
Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 11:28:24.745273   14601 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220531112729-2169"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
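The staged kubeadm configuration ends here; it is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below, and a file like this is ultimately consumed by kubeadm itself. A hedged sketch of the generic invocation (not necessarily the exact flags minikube passes):

	# Generic way a staged kubeadm config is applied
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml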
	
	I0531 11:28:24.745338   14601 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220531112729-2169 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 11:28:24.745395   14601 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 11:28:24.752959   14601 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 11:28:24.753032   14601 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 11:28:24.759894   14601 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0531 11:28:24.772209   14601 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 11:28:24.784449   14601 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
	I0531 11:28:24.796924   14601 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 11:28:24.800433   14601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 11:28:24.809821   14601 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169 for IP: 192.168.58.2
	I0531 11:28:24.809929   14601 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 11:28:24.810011   14601 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 11:28:24.810092   14601 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/client.key
	I0531 11:28:24.810156   14601 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.key.cee25041
	I0531 11:28:24.810205   14601 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.key
	I0531 11:28:24.810423   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem (1338 bytes)
	W0531 11:28:24.810461   14601 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169_empty.pem, impossibly tiny 0 bytes
	I0531 11:28:24.810473   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 11:28:24.810508   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 11:28:24.810539   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 11:28:24.810574   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1675 bytes)
	I0531 11:28:24.810635   14601 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem (1708 bytes)
	I0531 11:28:24.811155   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 11:28:24.827721   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 11:28:24.844468   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 11:28:24.861175   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531112729-2169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 11:28:24.878420   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 11:28:24.896393   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 11:28:24.913732   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 11:28:24.930651   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 11:28:24.947273   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/2169.pem --> /usr/share/ca-certificates/2169.pem (1338 bytes)
	I0531 11:28:24.963888   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/21692.pem --> /usr/share/ca-certificates/21692.pem (1708 bytes)
	I0531 11:28:24.980969   14601 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 11:28:24.998182   14601 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 11:28:25.010512   14601 ssh_runner.go:195] Run: openssl version
	I0531 11:28:25.015678   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2169.pem && ln -fs /usr/share/ca-certificates/2169.pem /etc/ssl/certs/2169.pem"
	I0531 11:28:25.023240   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.026950   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:16 /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.026984   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2169.pem
	I0531 11:28:25.032002   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2169.pem /etc/ssl/certs/51391683.0"
	I0531 11:28:25.039256   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21692.pem && ln -fs /usr/share/ca-certificates/21692.pem /etc/ssl/certs/21692.pem"
	I0531 11:28:25.046930   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.050635   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:16 /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.050678   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21692.pem
	I0531 11:28:25.055739   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21692.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 11:28:25.062867   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 11:28:25.070401   14601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.074092   14601 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.074134   14601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 11:28:25.079290   14601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
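The three blocks above install each certificate into the system trust store using the OpenSSL hashed-name convention: a cert is trusted once a symlink named <subject-hash>.0 under /etc/ssl/certs points at it. The idiom in isolation, using one of the certs above:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"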
	I0531 11:28:25.086508   14601 kubeadm.go:395] StartCluster: {Name:newest-cni-20220531112729-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531112729-2169 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_r
unning:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 11:28:25.086608   14601 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:28:25.115424   14601 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 11:28:25.123088   14601 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 11:28:25.123106   14601 kubeadm.go:626] restartCluster start
	I0531 11:28:25.123166   14601 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 11:28:25.130286   14601 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.130356   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:25.201430   14601 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220531112729-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:25.201614   14601 kubeconfig.go:127] "newest-cni-20220531112729-2169" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 11:28:25.202983   14601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:25.204253   14601 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 11:28:25.211900   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.211944   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.220060   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
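From here restartCluster polls for a running apiserver with pgrep at roughly 200ms intervals; the repeated "Checking apiserver status" / "stopped" pairs below are iterations of one wait loop, not independent failures. The polling shape as a standalone sketch (the retry budget here is hypothetical):

	# Hypothetical standalone version of the poll recorded below
	for _ in $(seq 1 50); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.2
	done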
	I0531 11:28:25.422181   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.422379   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.433780   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.620174   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.620309   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.632389   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:25.820588   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:25.820750   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:25.831459   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.022339   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.022452   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.032812   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.222206   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.222338   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.233015   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.422196   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.422320   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.432962   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.620439   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.620525   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.629706   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:26.820697   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:26.820806   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:26.831264   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.022187   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.022343   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.032905   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.220251   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.220391   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.230734   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.420963   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.421067   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.430383   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.621762   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.621857   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.632734   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:27.820676   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:27.820772   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:27.831490   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.022170   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.022312   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.033000   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.221452   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.221582   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.232311   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.232323   14601 api_server.go:165] Checking apiserver status ...
	I0531 11:28:28.232368   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 11:28:28.240337   14601 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.240351   14601 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 11:28:28.240362   14601 kubeadm.go:1092] stopping kube-system containers ...
	I0531 11:28:28.240419   14601 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 11:28:28.270566   14601 docker.go:442] Stopping containers: [b8f7cf8c7771 432c9954381c 7c076965981f 963a9454c026 22c69b053d31 85c82e0a3dfd 0f95a6838cd9 02136fcb6f2a 1968673ca085 f103292226f6 78ffb0ab7dc5 7685bdfe2259 c2c4289070e6 53615169312d b84f3422d4f3 9b9f23fa412f c5d361a450c5]
	I0531 11:28:28.270636   14601 ssh_runner.go:195] Run: docker stop b8f7cf8c7771 432c9954381c 7c076965981f 963a9454c026 22c69b053d31 85c82e0a3dfd 0f95a6838cd9 02136fcb6f2a 1968673ca085 f103292226f6 78ffb0ab7dc5 7685bdfe2259 c2c4289070e6 53615169312d b84f3422d4f3 9b9f23fa412f c5d361a450c5
	I0531 11:28:28.300361   14601 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 11:28:28.310648   14601 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 11:28:28.318184   14601 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 18:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 18:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 18:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 18:27 /etc/kubernetes/scheduler.conf
	
	I0531 11:28:28.318233   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 11:28:28.325358   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 11:28:28.332511   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 11:28:28.339617   14601 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.339668   14601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 11:28:28.346531   14601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 11:28:28.353553   14601 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 11:28:28.353596   14601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 11:28:28.360560   14601 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 11:28:28.367876   14601 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 11:28:28.367886   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:28.411595   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.400253   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.531124   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.579235   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:29.634030   14601 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:28:29.634095   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.148677   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.646607   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:30.665446   14601 api_server.go:71] duration metric: took 1.031430401s to wait for apiserver process to appear ...
	I0531 11:28:30.665473   14601 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:28:30.665491   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:30.666674   14601 api_server.go:256] stopped: https://127.0.0.1:55181/healthz: Get "https://127.0.0.1:55181/healthz": EOF
	I0531 11:28:31.168738   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:33.658657   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:28:33.658673   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:28:33.667080   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:33.676399   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 11:28:33.676422   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 11:28:34.166979   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:34.174133   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:28:34.174146   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:28:34.667060   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:34.672889   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 11:28:34.672907   14601 api_server.go:102] status: https://127.0.0.1:55181/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 11:28:35.166970   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:35.172997   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 200:
	ok
	I0531 11:28:35.179538   14601 api_server.go:140] control plane version: v1.23.6
	I0531 11:28:35.179550   14601 api_server.go:130] duration metric: took 4.514120757s to wait for apiserver health ...
	I0531 11:28:35.179559   14601 cni.go:95] Creating CNI manager for ""
	I0531 11:28:35.179568   14601 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 11:28:35.179579   14601 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:28:35.186211   14601 system_pods.go:59] 8 kube-system pods found
	I0531 11:28:35.186226   14601 system_pods.go:61] "coredns-64897985d-m9wpk" [6f096a6e-7731-47f7-b98e-6eedbbd5b841] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 11:28:35.186231   14601 system_pods.go:61] "etcd-newest-cni-20220531112729-2169" [a5bfba25-ff48-42e0-9142-b085b624ec85] Running
	I0531 11:28:35.186234   14601 system_pods.go:61] "kube-apiserver-newest-cni-20220531112729-2169" [c890673a-c33b-4b7e-a6dd-241265cbe97e] Running
	I0531 11:28:35.186238   14601 system_pods.go:61] "kube-controller-manager-newest-cni-20220531112729-2169" [f085c574-4e96-49d9-b05a-9ae7e77756a4] Running
	I0531 11:28:35.186244   14601 system_pods.go:61] "kube-proxy-rml7v" [2a4877b2-6059-4ed5-b39a-d3aa0e50175a] Running
	I0531 11:28:35.186249   14601 system_pods.go:61] "kube-scheduler-newest-cni-20220531112729-2169" [13285495-f320-4400-a06d-5aa124a9f708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 11:28:35.186256   14601 system_pods.go:61] "metrics-server-b955d9d8-4nh24" [d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:28:35.186260   14601 system_pods.go:61] "storage-provisioner" [dfa38144-a068-4404-9087-254b825409e4] Running
	I0531 11:28:35.186263   14601 system_pods.go:74] duration metric: took 6.680457ms to wait for pod list to return data ...
	I0531 11:28:35.186268   14601 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:28:35.188933   14601 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:28:35.188950   14601 node_conditions.go:123] node cpu capacity is 6
	I0531 11:28:35.188962   14601 node_conditions.go:105] duration metric: took 2.690302ms to run NodePressure ...
	I0531 11:28:35.188973   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 11:28:35.352632   14601 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 11:28:35.361125   14601 ops.go:34] apiserver oom_adj: -16
	I0531 11:28:35.361143   14601 kubeadm.go:630] restartCluster took 10.238154537s
	I0531 11:28:35.361151   14601 kubeadm.go:397] StartCluster complete in 10.274772238s
	I0531 11:28:35.361170   14601 settings.go:142] acquiring lock: {Name:mkc17c35ebad7086bc70ce4ee00847f82178f01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:35.361244   14601 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 11:28:35.361875   14601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk6c70fe678645646629da03168e2152bf3af7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 11:28:35.364955   14601 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531112729-2169" rescaled to 1
	I0531 11:28:35.364987   14601 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 11:28:35.441880   14601 out.go:177] * Verifying Kubernetes components...
	I0531 11:28:35.365003   14601 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 11:28:35.365025   14601 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 11:28:35.365144   14601 config.go:178] Loaded profile config "newest-cni-20220531112729-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 11:28:35.442135   14601 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.442144   14601 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.479754   14601 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531112729-2169"
	W0531 11:28:35.479769   14601 addons.go:165] addon metrics-server should already be in state true
	I0531 11:28:35.479783   14601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 11:28:35.479760   14601 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531112729-2169"
	W0531 11:28:35.479821   14601 addons.go:165] addon dashboard should already be in state true
	I0531 11:28:35.479822   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.442127   14601 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531112729-2169"
	I0531 11:28:35.479852   14601 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531112729-2169"
	I0531 11:28:35.479855   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.442146   14601 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531112729-2169"
	W0531 11:28:35.479865   14601 addons.go:165] addon storage-provisioner should already be in state true
	I0531 11:28:35.479883   14601 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531112729-2169"
	I0531 11:28:35.479911   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.480183   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.480218   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.480300   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.481040   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.525057   14601 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 11:28:35.525155   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.676584   14601 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 11:28:35.624235   14601 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531112729-2169"
	I0531 11:28:35.639696   14601 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 11:28:35.713827   14601 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0531 11:28:35.676634   14601 addons.go:165] addon default-storageclass should already be in state true
	I0531 11:28:35.731542   14601 api_server.go:51] waiting for apiserver process to appear ...
	I0531 11:28:35.751936   14601 host.go:66] Checking if "newest-cni-20220531112729-2169" exists ...
	I0531 11:28:35.752094   14601 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:28:35.811040   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 11:28:35.849003   14601 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 11:28:35.811081   14601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 11:28:35.811141   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 11:28:35.811765   14601 cli_runner.go:164] Run: docker container inspect newest-cni-20220531112729-2169 --format={{.State.Status}}
	I0531 11:28:35.849157   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.886646   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 11:28:35.886796   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 11:28:35.886795   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.886823   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 11:28:35.886947   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:35.907168   14601 api_server.go:71] duration metric: took 542.159545ms to wait for apiserver process to appear ...
	I0531 11:28:35.907219   14601 api_server.go:87] waiting for apiserver healthz status ...
	I0531 11:28:35.907267   14601 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55181/healthz ...
	I0531 11:28:35.920781   14601 api_server.go:266] https://127.0.0.1:55181/healthz returned 200:
	ok
	I0531 11:28:35.923207   14601 api_server.go:140] control plane version: v1.23.6
	I0531 11:28:35.923240   14601 api_server.go:130] duration metric: took 16.012254ms to wait for apiserver health ...
	I0531 11:28:35.923248   14601 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 11:28:35.933658   14601 system_pods.go:59] 8 kube-system pods found
	I0531 11:28:35.933689   14601 system_pods.go:61] "coredns-64897985d-m9wpk" [6f096a6e-7731-47f7-b98e-6eedbbd5b841] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 11:28:35.933698   14601 system_pods.go:61] "etcd-newest-cni-20220531112729-2169" [a5bfba25-ff48-42e0-9142-b085b624ec85] Running
	I0531 11:28:35.933710   14601 system_pods.go:61] "kube-apiserver-newest-cni-20220531112729-2169" [c890673a-c33b-4b7e-a6dd-241265cbe97e] Running
	I0531 11:28:35.933728   14601 system_pods.go:61] "kube-controller-manager-newest-cni-20220531112729-2169" [f085c574-4e96-49d9-b05a-9ae7e77756a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 11:28:35.933736   14601 system_pods.go:61] "kube-proxy-rml7v" [2a4877b2-6059-4ed5-b39a-d3aa0e50175a] Running
	I0531 11:28:35.933747   14601 system_pods.go:61] "kube-scheduler-newest-cni-20220531112729-2169" [13285495-f320-4400-a06d-5aa124a9f708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 11:28:35.933759   14601 system_pods.go:61] "metrics-server-b955d9d8-4nh24" [d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 11:28:35.933779   14601 system_pods.go:61] "storage-provisioner" [dfa38144-a068-4404-9087-254b825409e4] Running
	I0531 11:28:35.933786   14601 system_pods.go:74] duration metric: took 10.533198ms to wait for pod list to return data ...
	I0531 11:28:35.933792   14601 default_sa.go:34] waiting for default service account to be created ...
	I0531 11:28:35.938145   14601 default_sa.go:45] found service account: "default"
	I0531 11:28:35.938165   14601 default_sa.go:55] duration metric: took 4.366593ms for default service account to be created ...
	I0531 11:28:35.938197   14601 kubeadm.go:572] duration metric: took 573.199171ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 11:28:35.938223   14601 node_conditions.go:102] verifying NodePressure condition ...
	I0531 11:28:35.942426   14601 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0531 11:28:35.942450   14601 node_conditions.go:123] node cpu capacity is 6
	I0531 11:28:35.942465   14601 node_conditions.go:105] duration metric: took 4.236351ms to run NodePressure ...
	I0531 11:28:35.942485   14601 start.go:213] waiting for startup goroutines ...
	I0531 11:28:36.012965   14601 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 11:28:36.012980   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 11:28:36.013037   14601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531112729-2169
	I0531 11:28:36.013049   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.013580   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.015074   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.092243   14601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531112729-2169/id_rsa Username:docker}
	I0531 11:28:36.148273   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 11:28:36.245789   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 11:28:36.245817   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 11:28:36.247745   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 11:28:36.247758   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 11:28:36.345201   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 11:28:36.345217   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 11:28:36.345894   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 11:28:36.348010   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 11:28:36.348023   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 11:28:36.433009   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 11:28:36.433023   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 11:28:36.436199   14601 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:28:36.436215   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 11:28:36.458750   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 11:28:36.458764   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 11:28:36.460817   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 11:28:36.555796   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 11:28:36.555811   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 11:28:36.660576   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 11:28:36.660591   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 11:28:36.746397   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 11:28:36.746413   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 11:28:36.762642   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 11:28:36.762659   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 11:28:36.779687   14601 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:28:36.779700   14601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 11:28:36.851105   14601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 11:28:37.356022   14601 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.207732118s)
	I0531 11:28:37.356099   14601 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010190378s)
	I0531 11:28:37.447297   14601 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531112729-2169"
	I0531 11:28:37.650818   14601 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 11:28:37.709272   14601 addons.go:417] enableAddons completed in 2.34427737s
	I0531 11:28:37.742397   14601 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0531 11:28:37.763847   14601 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531112729-2169" cluster and "default" namespace by default
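
The restart sequence above shows a clear polling pattern: minikube probes the apiserver's /healthz endpoint, treating the 403 responses ("system:anonymous" before RBAC bootstraps) and the 500 responses (post-start hooks still failing) as "not yet healthy" until a bare 200 "ok" arrives. Below is a minimal Go sketch of that loop, assuming the self-signed localhost certificate seen in this run (hence InsecureSkipVerify) and the per-run port 55181 from the log; it illustrates the pattern only and is not minikube's actual api_server.go code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        url := "https://127.0.0.1:55181/healthz" // port is assigned per run
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver presents a self-signed certificate on localhost.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
                // 403/500 mean "up but not healthy yet" -- keep polling.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }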
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 18:28:21 UTC, end at Tue 2022-05-31 18:29:24 UTC. --
	May 31 18:28:39 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:28:39.541001799Z" level=error msg="Handler for GET /v1.41/containers/7a47e564e7ac08813dcecfa1b75dde332695bfd626e0f2c84938334ce5236a5e/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	May 31 18:28:39 newest-cni-20220531112729-2169 dockerd[130]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	May 31 18:29:15 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:15.664821593Z" level=info msg="ignoring event" container=7a47e564e7ac08813dcecfa1b75dde332695bfd626e0f2c84938334ce5236a5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:16 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:16.275188592Z" level=info msg="ignoring event" container=3c06ca5cd9628b379e317f8b85557b21f22e6dba99c59d0ddf80c21054643ed2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:16 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:16.276797508Z" level=info msg="ignoring event" container=acfd20649c72c2c188ef6dd8c75040a64d1dd89976699032b0db83261e520a1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:17 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:17.094828720Z" level=info msg="ignoring event" container=42193d4bf25a9c98452119ae1287211a8b1c2af714d0fb20a4f7ff3aa2148ed9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:17 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:17.304225511Z" level=info msg="ignoring event" container=cad8a77063295496f65d09d8cfcf17864785caef2528bd1d1eeeea7e88fdb308 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:17 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:17.552000296Z" level=info msg="ignoring event" container=af914aa0332467afaba4addb7aa5b249876571bca6ac56095e2399d71cdfe6b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:18 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:18.505424593Z" level=info msg="ignoring event" container=f793bd71970e37102fabd6603869afdab5b6f6e4fd70f81a28a9a740beb33411 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:18 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:18.509460118Z" level=info msg="ignoring event" container=d9a6428e8c63b75ea5344abfb03c0be4114fe7a6d8240c7bb7fdb6e3611fe4d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:18 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:18.509491217Z" level=info msg="ignoring event" container=09074ce331644d1a11bbae43393da1a4627df33fc36576a5ade7bfa17607e3ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:18 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:18.511264204Z" level=info msg="ignoring event" container=9532d06b528ca7839562ebfea5795d5ae9d8b980a8e54db4ccf0314cae5582b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:19 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:19.435306786Z" level=info msg="ignoring event" container=51f5a053d11cbd43a48ead164cd155c40eca681f0a11285ddf7201ada268c045 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:19 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:19.502409996Z" level=info msg="ignoring event" container=2b27a2dceb7c973433850340b70eed4cc23b628e66feb4c93b20008a2ac88552 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:19 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:19.506311154Z" level=info msg="ignoring event" container=91274f2b50d3d8bb215fe5c2a47ac98d915881e8d44c1a41b787a391715c0100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:21 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:21.313766710Z" level=info msg="ignoring event" container=acd4b8f80c6ccfcbbea012ddacfd0824f70c7e035c268930cab8027151d71072 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:21 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:21.623370071Z" level=info msg="ignoring event" container=7434be3102bcb70feef42794417050c9847aa40b71c60fd7711c490bd294c32e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:21 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:21.708859193Z" level=info msg="ignoring event" container=d26ac3e067163736ee6a590d712ceb0193253a509ae7b7e8f20c6f2dcb6f89e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:21 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:21.880304585Z" level=info msg="ignoring event" container=ecb5975912cd3cf156bcc0c4d765fa3ebb879d0bfb76cc5bacf306a59b1b842c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:22.296956131Z" level=info msg="ignoring event" container=49d4360d2cb7bcf76bd010c90a237e4652aadf69b7232e0369ea2c56c9e93a36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:22 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:22.325920835Z" level=info msg="ignoring event" container=9f915f9a4a2735a9c0fd34f5e6a3cc0a11d0693e9337cf3e594727013d9b2b2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:23 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:23.621072423Z" level=info msg="ignoring event" container=16a119db063eee97327d0a229545c1e0e6d327416365de52a02f78570b4b8436 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:23 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:23.631141244Z" level=info msg="ignoring event" container=1352516cc432293dbc01f3aee480c992f1edb36de8d59fc2f41023b1213ca144 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:23 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:23.727821354Z" level=info msg="ignoring event" container=6c9c772e915973bbc815e7e04b630386b40a1005fe9c17ed9619a04346524df1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 18:29:23 newest-cni-20220531112729-2169 dockerd[130]: time="2022-05-31T18:29:23.732232850Z" level=info msg="ignoring event" container=e4eb9b7d416c0fc018e1a2e0e986b1dab6af6b81f9e6cf4464452cdd86472b49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
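
The "ignoring event ... TaskDelete" lines are dockerd acknowledging container teardown as the node's containers are deleted at the end of the test. The RFC3339Nano timestamps make the window easy to measure; a small Go check using only the two timestamps copied from the lines above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the first and last dockerd lines above.
        first, _ := time.Parse(time.RFC3339Nano, "2022-05-31T18:28:39.541001799Z")
        last, _ := time.Parse(time.RFC3339Nano, "2022-05-31T18:29:23.732232850Z")
        fmt.Println("teardown window:", last.Sub(first)) // ~44s from first error to last delete
    }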
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	3fd4cdf3a5fd7       6e38f40d628db       48 seconds ago       Running             storage-provisioner       1                   2c0de734fef36
	93f13d37d4785       4c03754524064       49 seconds ago       Running             kube-proxy                1                   8fa0dcd0b8612
	77a22cc95c9a9       25f8c7f3da61c       54 seconds ago       Running             etcd                      1                   84ceeb1af20d5
	962b034301273       595f327f224a4       54 seconds ago       Running             kube-scheduler            1                   4f032b43a8599
	531701230de4c       df7b72818ad2e       54 seconds ago       Running             kube-controller-manager   1                   08b8f59a5f4ba
	9564d7e881212       8fa62c12256df       54 seconds ago       Running             kube-apiserver            1                   77ad11efde503
	7c076965981f7       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   963a9454c0262
	0f95a6838cd9a       4c03754524064       About a minute ago   Exited              kube-proxy                0                   02136fcb6f2a8
	f103292226f66       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   b84f3422d4f34
	78ffb0ab7dc51       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   9b9f23fa412f9
	7685bdfe22590       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   c5d361a450c54
	c2c4289070e65       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   53615169312d5
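
The container status table pairs each restarted component (ATTEMPT 1, Running) with its pre-restart instance (ATTEMPT 0, Exited); the exited IDs match the ones minikube stopped earlier in the restartCluster step. The listing can be reproduced with the same docker filter minikube ran above; a sketch, assuming it executes where the k8s_* dockershim container names exist (e.g. after minikube ssh):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same filter minikube used earlier to enumerate kube-system containers.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_",
            "--format", "{{.ID}} {{.Names}} {{.Status}}").CombinedOutput()
        if err != nil {
            fmt.Println("docker ps failed:", err)
        }
        fmt.Print(string(out))
    }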
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220531112729-2169
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220531112729-2169
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=newest-cni-20220531112729-2169
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T11_27_52_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:27:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220531112729-2169
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:29:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:29:13 +0000   Tue, 31 May 2022 18:27:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:29:13 +0000   Tue, 31 May 2022 18:27:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:29:13 +0000   Tue, 31 May 2022 18:27:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:29:13 +0000   Tue, 31 May 2022 18:29:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220531112729-2169
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                270d6860-d295-4a39-8bcb-83c3e922fb10
	  Boot ID:                    b115650d-30b9-46ea-a569-e51afa147d01
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-m9wpk                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     79s
	  kube-system                 etcd-newest-cni-20220531112729-2169                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         92s
	  kube-system                 kube-apiserver-newest-cni-20220531112729-2169             250m (4%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-newest-cni-20220531112729-2169    200m (3%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-rml7v                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-newest-cni-20220531112729-2169             100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 metrics-server-b955d9d8-4nh24                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         77s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-r6z52                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-8v2px                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 48s                kube-proxy  
	  Normal  Starting                 79s                kube-proxy  
	  Normal  NodeHasSufficientMemory  92s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 92s                kubelet     Starting kubelet.
	  Normal  NodeReady                82s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeReady
	  Normal  Starting                 55s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x7 over 55s)  kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x7 over 55s)  kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  55s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     55s (x7 over 55s)  kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientPID
	  Normal  Starting                 11s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                11s                kubelet     Node newest-cni-20220531112729-2169 status is now: NodeReady
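
The Allocated resources percentages follow directly from the capacity block above (6 CPUs, 6086504Ki memory); a quick arithmetic check in Go, using only figures taken from this node description:

    package main

    import "fmt"

    func main() {
        cpuRequestMilli := 850.0     // 850m total CPU requests
        cpuCapacityMilli := 6000.0   // 6 CPUs
        memRequestKi := 370.0 * 1024 // 370Mi total memory requests
        memCapacityKi := 6086504.0   // node memory capacity in Ki

        fmt.Printf("cpu requests: %.0f%%\n", 100*cpuRequestMilli/cpuCapacityMilli) // 14%
        fmt.Printf("memory requests: %.0f%%\n", 100*memRequestKi/memCapacityKi)    // 6%
    }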
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [77a22cc95c9a] <==
	* {"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-05-31T18:28:32.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-05-31T18:28:32.267Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220531112729-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:28:32.267Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:28:32.267Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:28:32.268Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:28:32.268Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:28:32.268Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:28:32.270Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-05-31T18:29:18.968Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"170.903454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-b955d9d8-4nh24\" ","response":"range_response_count:1 size:3875"}
	{"level":"info","ts":"2022-05-31T18:29:18.968Z","caller":"traceutil/trace.go:171","msg":"trace[966545425] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-b955d9d8-4nh24; range_end:; response_count:1; response_revision:638; }","duration":"171.113084ms","start":"2022-05-31T18:29:18.797Z","end":"2022-05-31T18:29:18.968Z","steps":["trace[966545425] 'agreement among raft nodes before linearized reading'  (duration: 58.292372ms)","trace[966545425] 'range keys from in-memory index tree'  (duration: 112.55969ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T18:29:19.899Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.854168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20220531112729-2169\" ","response":"range_response_count:1 size:4544"}
	{"level":"info","ts":"2022-05-31T18:29:19.900Z","caller":"traceutil/trace.go:171","msg":"trace[714908729] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-20220531112729-2169; range_end:; response_count:1; response_revision:642; }","duration":"119.919345ms","start":"2022-05-31T18:29:19.780Z","end":"2022-05-31T18:29:19.900Z","steps":["trace[714908729] 'agreement among raft nodes before linearized reading'  (duration: 28.161707ms)","trace[714908729] 'range keys from in-memory index tree'  (duration: 91.655434ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T18:29:19.900Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.410619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52.16f44406748248b4\" ","response":"range_response_count:1 size:788"}
	{"level":"info","ts":"2022-05-31T18:29:19.900Z","caller":"traceutil/trace.go:171","msg":"trace[788296307] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52.16f44406748248b4; range_end:; response_count:1; response_revision:642; }","duration":"123.471163ms","start":"2022-05-31T18:29:19.776Z","end":"2022-05-31T18:29:19.900Z","steps":["trace[788296307] 'agreement among raft nodes before linearized reading'  (duration: 31.546609ms)","trace[788296307] 'range keys from in-memory index tree'  (duration: 91.837957ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T18:29:20.083Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.32921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-b955d9d8-4nh24.16f44405f9d4dbab\" ","response":"range_response_count:1 size:722"}
	{"level":"info","ts":"2022-05-31T18:29:20.084Z","caller":"traceutil/trace.go:171","msg":"trace[310971805] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-b955d9d8-4nh24.16f44405f9d4dbab; range_end:; response_count:1; response_revision:644; }","duration":"118.694872ms","start":"2022-05-31T18:29:19.965Z","end":"2022-05-31T18:29:20.084Z","steps":["trace[310971805] 'agreement among raft nodes before linearized reading'  (duration: 36.237423ms)","trace[310971805] 'range keys from in-memory index tree'  (duration: 82.055119ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T18:29:22.893Z","caller":"traceutil/trace.go:171","msg":"trace[559694998] linearizableReadLoop","detail":"{readStateIndex:708; appliedIndex:708; }","duration":"112.907322ms","start":"2022-05-31T18:29:22.780Z","end":"2022-05-31T18:29:22.893Z","steps":["trace[559694998] 'read index received'  (duration: 112.898492ms)","trace[559694998] 'applied index is now lower than readState.Index'  (duration: 7.82µs)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T18:29:22.893Z","caller":"traceutil/trace.go:171","msg":"trace[1536279775] transaction","detail":"{read_only:false; response_revision:665; number_of_response:1; }","duration":"111.268643ms","start":"2022-05-31T18:29:22.781Z","end":"2022-05-31T18:29:22.893Z","steps":["trace[1536279775] 'process raft request'  (duration: 111.178525ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T18:29:22.893Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.078443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-b955d9d8-4nh24.16f44405f9d4dbab\" ","response":"range_response_count:1 size:722"}
	{"level":"info","ts":"2022-05-31T18:29:22.893Z","caller":"traceutil/trace.go:171","msg":"trace[874569328] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-b955d9d8-4nh24.16f44405f9d4dbab; range_end:; response_count:1; response_revision:664; }","duration":"113.355269ms","start":"2022-05-31T18:29:22.780Z","end":"2022-05-31T18:29:22.893Z","steps":["trace[874569328] 'agreement among raft nodes before linearized reading'  (duration: 113.043828ms)"],"step_count":1}
	
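The "apply request took too long" warnings above are etcd flagging read-only range requests that exceeded its 100ms expected-duration threshold (here 110-190ms, dominated by the 'range keys from in-memory index tree' and raft-agreement steps), which points at an I/O- and CPU-contended CI host rather than a correctness problem. A minimal Go sketch (a hypothetical triage helper, not part of minikube) that, fed these JSON log lines on stdin, pulls out the reported durations:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // etcdEntry covers just the fields we need from etcd's JSON log format.
    type etcdEntry struct {
        Level string `json:"level"`
        Msg   string `json:"msg"`
        Took  string `json:"took"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // etcd lines can be long
        for sc.Scan() {
            var e etcdEntry
            if json.Unmarshal(sc.Bytes(), &e) != nil {
                continue // skip non-JSON lines (grpc WARNING lines, section headers, etc.)
            }
            if e.Level == "warn" && e.Msg == "apply request took too long" {
                fmt.Println(e.Took) // e.g. "170.903454ms"
            }
        }
    }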
	* 
	* ==> etcd [f103292226f6] <==
	* {"level":"info","ts":"2022-05-31T18:27:46.285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:27:46.286Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220531112729-2169 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:27:46.286Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:27:46.286Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:27:46.286Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:27:46.287Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:27:46.287Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:27:46.288Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:28:07.726Z","caller":"traceutil/trace.go:171","msg":"trace[829804833] linearizableReadLoop","detail":"{readStateIndex:514; appliedIndex:514; }","duration":"187.328642ms","start":"2022-05-31T18:28:07.539Z","end":"2022-05-31T18:28:07.726Z","steps":["trace[829804833] 'read index received'  (duration: 187.322273ms)","trace[829804833] 'applied index is now lower than readState.Index'  (duration: 5.364µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T18:28:07.729Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"190.525408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-cssvj\" ","response":"range_response_count:1 size:4348"}
	{"level":"info","ts":"2022-05-31T18:28:07.729Z","caller":"traceutil/trace.go:171","msg":"trace[68716722] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-cssvj; range_end:; response_count:1; response_revision:500; }","duration":"190.716622ms","start":"2022-05-31T18:28:07.539Z","end":"2022-05-31T18:28:07.729Z","steps":["trace[68716722] 'agreement among raft nodes before linearized reading'  (duration: 187.461455ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T18:28:07.729Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"165.056009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2022-05-31T18:28:07.730Z","caller":"traceutil/trace.go:171","msg":"trace[954045274] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:500; }","duration":"165.325284ms","start":"2022-05-31T18:28:07.564Z","end":"2022-05-31T18:28:07.729Z","steps":["trace[954045274] 'agreement among raft nodes before linearized reading'  (duration: 162.105561ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T18:28:08.091Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-05-31T18:28:08.091Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220531112729-2169","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/05/31 18:28:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/05/31 18:28:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-05-31T18:28:08.098Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-05-31T18:28:08.099Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:28:08.100Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:28:08.100Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220531112729-2169","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:29:25 up  1:17,  0 users,  load average: 2.07, 1.20, 1.10
	Linux newest-cni-20220531112729-2169 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9564d7e88121] <==
	* I0531 18:28:33.753074       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 18:28:33.754281       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 18:28:33.755371       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 18:28:33.755901       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 18:28:33.770198       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 18:28:33.776979       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:28:34.653297       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 18:28:34.653363       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 18:28:34.656797       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0531 18:28:34.780506       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:28:34.780653       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:28:34.780671       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:28:35.274633       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:28:35.281023       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:28:35.311609       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:28:35.352482       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:28:35.357506       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:28:35.929988       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:28:37.446106       1 controller.go:611] quota admission added evaluator for: namespaces
	I0531 18:28:37.570738       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.221.14]
	I0531 18:28:37.580320       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.110.157.141]
	I0531 18:29:12.633765       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:29:13.775999       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:29:14.074709       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [c2c4289070e6] <==
	* W0531 18:28:09.096294       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	[... the same addrConn.createTransport "connection refused" warning for 127.0.0.1:2379 repeats 24 more times within the same millisecond (18:28:09.096), elided for readability ...]
	
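The burst of connection-refused warnings at 18:28:09 in this (older) apiserver instance lines up with the etcd [f103292226f6] log above, which closed its server at 18:28:08: the apiserver's grpc client kept retrying 127.0.0.1:2379 while etcd was down across the restart. A minimal sketch of the kind of probe that confirms the port state (hypothetical; assumes it runs in the same network namespace as etcd):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the etcd client port the apiserver is retrying against.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
        if err != nil {
            // While etcd is stopped this reports the same "connection refused"
            // seen in the apiserver warnings above.
            fmt.Println("etcd client port unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("etcd client port is accepting connections")
    }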
	* 
	* ==> kube-controller-manager [531701230de4] <==
	* I0531 18:29:13.728386       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0531 18:29:13.742376       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0531 18:29:13.749471       1 shared_informer.go:247] Caches are synced for taint 
	I0531 18:29:13.749557       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0531 18:29:13.749574       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 18:29:13.749621       1 event.go:294] "Event occurred" object="newest-cni-20220531112729-2169" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220531112729-2169 event: Registered Node newest-cni-20220531112729-2169 in Controller"
	W0531 18:29:13.749604       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220531112729-2169. Assuming now as a timestamp.
	I0531 18:29:13.749660       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0531 18:29:13.749772       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 18:29:13.771177       1 shared_informer.go:247] Caches are synced for TTL 
	I0531 18:29:13.777515       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0531 18:29:13.778576       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0531 18:29:13.822879       1 shared_informer.go:247] Caches are synced for attach detach 
	I0531 18:29:13.825297       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:29:13.832435       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:29:13.834952       1 shared_informer.go:247] Caches are synced for stateful set 
	I0531 18:29:13.845936       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 18:29:13.847074       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 18:29:13.857359       1 shared_informer.go:247] Caches are synced for expand 
	I0531 18:29:13.873526       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0531 18:29:14.129146       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-8v2px"
	I0531 18:29:14.133358       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-r6z52"
	I0531 18:29:14.240593       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:29:14.240628       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 18:29:14.243130       1 shared_informer.go:247] Caches are synced for garbage collector 
	
	* 
	* ==> kube-controller-manager [7685bdfe2259] <==
	* I0531 18:28:04.273542       1 shared_informer.go:247] Caches are synced for PV protection 
	I0531 18:28:04.285172       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:28:04.298828       1 shared_informer.go:247] Caches are synced for taint 
	I0531 18:28:04.299004       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0531 18:28:04.299054       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220531112729-2169. Assuming now as a timestamp.
	I0531 18:28:04.299092       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0531 18:28:04.299292       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 18:28:04.299382       1 event.go:294] "Event occurred" object="newest-cni-20220531112729-2169" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220531112729-2169 event: Registered Node newest-cni-20220531112729-2169 in Controller"
	I0531 18:28:04.321145       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 18:28:04.327204       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:28:04.370708       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 18:28:04.370776       1 disruption.go:371] Sending events to api server.
	I0531 18:28:04.744877       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:28:04.794284       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:28:04.794345       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 18:28:04.878999       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rml7v"
	I0531 18:28:05.027228       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 18:28:05.113311       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 18:28:05.125701       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-cssvj"
	I0531 18:28:05.129589       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-m9wpk"
	I0531 18:28:05.142073       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-cssvj"
	I0531 18:28:07.285742       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0531 18:28:07.289244       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0531 18:28:07.293150       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0531 18:28:07.298593       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-4nh24"
	
	* 
	* ==> kube-proxy [0f95a6838cd9] <==
	* I0531 18:28:05.480309       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:28:05.480375       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:28:05.480416       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:28:05.503214       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:28:05.503272       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:28:05.503280       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:28:05.503295       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:28:05.503750       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:28:05.504660       1 config.go:317] "Starting service config controller"
	I0531 18:28:05.504735       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:28:05.504770       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:28:05.504776       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:28:05.605592       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:28:05.605621       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [93f13d37d478] <==
	* I0531 18:28:35.669525       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:28:35.669579       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:28:35.669600       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:28:35.917077       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:28:35.917238       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:28:35.917486       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:28:35.917544       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:28:35.921268       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:28:35.922561       1 config.go:317] "Starting service config controller"
	I0531 18:28:35.922606       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:28:35.922628       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:28:35.922631       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:28:36.038270       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:28:36.038340       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [78ffb0ab7dc5] <==
	* W0531 18:27:49.193282       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:27:49.194233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:27:49.193268       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:27:49.194245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:27:49.194070       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:27:49.194231       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:27:49.194252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:27:49.194327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:27:49.195317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:27:49.195359       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:27:50.076759       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:27:50.076815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:27:50.148554       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:27:50.148608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:27:50.233409       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:27:50.233425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:27:50.265479       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:27:50.265521       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:27:50.307502       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:27:50.307545       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0531 18:27:50.690212       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0531 18:27:51.479213       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0531 18:28:08.092864       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0531 18:28:08.092960       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0531 18:28:08.094754       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [962b03430127] <==
	* W0531 18:28:30.747455       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0531 18:28:31.549409       1 serving.go:348] Generated self-signed cert in-memory
	W0531 18:28:33.692901       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 18:28:33.692995       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:28:33.693015       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 18:28:33.693027       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 18:28:33.746557       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0531 18:28:33.747771       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0531 18:28:33.747859       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 18:28:33.747887       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 18:28:33.748369       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 18:28:33.848573       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:28:21 UTC, end at Tue 2022-05-31 18:29:27 UTC. --
	May 31 18:29:26 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:26.743482    3689 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52" podSandboxID={Type:docker ID:0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9} podNetnsPath="/proc/8902/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:26 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:26.781086    3689 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.42 -j CNI-8cea33e87444daaf97fc6118 -m comment --comment name: \"crio\" id: \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8cea33e87444daaf97fc6118':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52" podSandboxID={Type:docker ID:0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9} podNetnsPath="/proc/8902/ns/net" networkType="bridge" networkName="crio"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.025628    3689 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" network for pod \"metrics-server-b955d9d8-4nh24\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-4nh24_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" network for pod \"metrics-server-b955d9d8-4nh24\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-4nh24_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.41 -j CNI-448f3919531988d5c331db46 -m comment --comment name: \"crio\" id: \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-448f3919531988d5c331db46':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.025693    3689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" network for pod \"metrics-server-b955d9d8-4nh24\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-4nh24_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" network for pod \"metrics-server-b955d9d8-4nh24\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-4nh24_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.41 -j CNI-448f3919531988d5c331db46 -m comment --comment name: \"crio\" id: \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-448f3919531988d5c331db46':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-4nh24"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.025716    3689 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" network for pod \"metrics-server-b955d9d8-4nh24\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-4nh24_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" network for pod \"metrics-server-b955d9d8-4nh24\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-4nh24_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.41 -j CNI-448f3919531988d5c331db46 -m comment --comment name: \"crio\" id: \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-448f3919531988d5c331db46':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-4nh24"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.025762    3689 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-4nh24_kube-system(d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-4nh24_kube-system(d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\\\" network for pod \\\"metrics-server-b955d9d8-4nh24\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-4nh24_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\\\" network for pod \\\"metrics-server-b955d9d8-4nh24\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-b955d9d8-4nh24_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.41 -j CNI-448f3919531988d5c331db46 -m comment --comment name: \\\"crio\\\" id: \\\"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-448f3919531988d5c331db46':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-4nh24" podUID=d5f2f3dc-56d4-4fa5-98a9-4f49dd8865d5
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.026250    3689 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" network for pod \"dashboard-metrics-scraper-56974995fc-r6z52\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" network for pod \"dashboard-metrics-scraper-56974995fc-r6z52\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.42 -j CNI-8cea33e87444daaf97fc6118 -m comment --comment name: \"crio\" id: \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8cea33e87444daaf97fc6118':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.026305    3689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" network for pod \"dashboard-metrics-scraper-56974995fc-r6z52\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" network for pod \"dashboard-metrics-scraper-56974995fc-r6z52\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.42 -j CNI-8cea33e87444daaf97fc6118 -m comment --comment name: \"crio\" id: \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8cea33e87444daaf97fc6118':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.026328    3689 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" network for pod \"dashboard-metrics-scraper-56974995fc-r6z52\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" network for pod \"dashboard-metrics-scraper-56974995fc-r6z52\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.42 -j CNI-8cea33e87444daaf97fc6118 -m comment --comment name: \"crio\" id: \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8cea33e87444daaf97fc6118':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.026373    3689 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard(5d039dd7-c288-4ae2-aaca-313ecf1c364f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard(5d039dd7-c288-4ae2-aaca-313ecf1c364f)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-r6z52\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-r6z52\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.42 -j CNI-8cea33e87444daaf97fc6118 -m comment --comment name: \\\"crio\\\" id: \\\"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8cea33e87444daaf97fc6118':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-r6z52" podUID=5d039dd7-c288-4ae2-aaca-313ecf1c364f
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.029803    3689 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" network for pod \"coredns-64897985d-m9wpk\": networkPlugin cni failed to set up pod \"coredns-64897985d-m9wpk_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" network for pod \"coredns-64897985d-m9wpk\": networkPlugin cni failed to teardown pod \"coredns-64897985d-m9wpk_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-b5e489e78628abba5cd617ae -m comment --comment name: \"crio\" id: \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b5e489e78628abba5cd617ae':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.029863    3689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" network for pod \"coredns-64897985d-m9wpk\": networkPlugin cni failed to set up pod \"coredns-64897985d-m9wpk_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" network for pod \"coredns-64897985d-m9wpk\": networkPlugin cni failed to teardown pod \"coredns-64897985d-m9wpk_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-b5e489e78628abba5cd617ae -m comment --comment name: \"crio\" id: \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b5e489e78628abba5cd617ae':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-m9wpk"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.029886    3689 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" network for pod \"coredns-64897985d-m9wpk\": networkPlugin cni failed to set up pod \"coredns-64897985d-m9wpk_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" network for pod \"coredns-64897985d-m9wpk\": networkPlugin cni failed to teardown pod \"coredns-64897985d-m9wpk_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-b5e489e78628abba5cd617ae -m comment --comment name: \"crio\" id: \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b5e489e78628abba5cd617ae':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-m9wpk"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: E0531 18:29:27.029929    3689 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-m9wpk_kube-system(6f096a6e-7731-47f7-b98e-6eedbbd5b841)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-m9wpk_kube-system(6f096a6e-7731-47f7-b98e-6eedbbd5b841)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\\\" network for pod \\\"coredns-64897985d-m9wpk\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-m9wpk_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\\\" network for pod \\\"coredns-64897985d-m9wpk\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-m9wpk_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-b5e489e78628abba5cd617ae -m comment --comment name: \\\"crio\\\" id: \\\"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b5e489e78628abba5cd617ae':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-m9wpk" podUID=6f096a6e-7731-47f7-b98e-6eedbbd5b841
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.035695    3689 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-b955d9d8-4nh24_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\""
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.040658    3689 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"851193850e9851036d9c27a9d8529afe192d6a2cae3a49f08e0a74ff8097365c\""
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.042363    3689 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"kubernetes-dashboard-8469778f77-8v2px_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"21f0aa13aec0b19dff473650fb8e86b200353be587162048e668cb6b52aa6d2c\""
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.047717    3689 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="21f0aa13aec0b19dff473650fb8e86b200353be587162048e668cb6b52aa6d2c"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.049320    3689 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"21f0aa13aec0b19dff473650fb8e86b200353be587162048e668cb6b52aa6d2c\""
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.051177    3689 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"dashboard-metrics-scraper-56974995fc-r6z52_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\""
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.056882    3689 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="eed0f4a5f1f8b2759ce2543efd82bfcddae9c26a0c4b5a00ae23ae3adf9d01f5"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.058609    3689 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"0b21bda56f564c3f1cc8f2a0a7bd5f4c232cd20bb208cfa3929f3a41c1f2c8d9\""
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.059374    3689 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-m9wpk_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\""
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.067134    3689 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="68707dfd7e95e24e7634081dd611b3adbbe88b04b72a8936337b3dd324c9ffef"
	May 31 18:29:27 newest-cni-20220531112729-2169 kubelet[3689]: I0531 18:29:27.068862    3689 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"f02ee77e84e303bd461d2bed8547f47b91050fe41bb1628899add26c626b6a08\""
	
	* 
	* ==> storage-provisioner [3fd4cdf3a5fd] <==
	* I0531 18:28:36.549023       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:28:36.562743       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:28:36.562793       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:29:12.635274       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:29:12.635459       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220531112729-2169_fa71ae38-7da7-47d1-84de-b1f1248566b6!
	I0531 18:29:12.636610       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"634329ce-a558-49c1-b9d8-0b4e8eaaae7c", APIVersion:"v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220531112729-2169_fa71ae38-7da7-47d1-84de-b1f1248566b6 became leader
	I0531 18:29:12.736575       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220531112729-2169_fa71ae38-7da7-47d1-84de-b1f1248566b6!
	
	* 
	* ==> storage-provisioner [7c076965981f] <==
	* I0531 18:28:07.032251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:28:07.040601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:28:07.040637       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:28:07.048317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:28:07.048451       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220531112729-2169_ad03480f-a450-496c-a547-ef901dc75c1c!
	I0531 18:28:07.048751       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"634329ce-a558-49c1-b9d8-0b4e8eaaae7c", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220531112729-2169_ad03480f-a450-496c-a547-ef901dc75c1c became leader
	I0531 18:28:07.148609       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220531112729-2169_ad03480f-a450-496c-a547-ef901dc75c1c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220531112729-2169 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-m9wpk metrics-server-b955d9d8-4nh24 dashboard-metrics-scraper-56974995fc-r6z52 kubernetes-dashboard-8469778f77-8v2px
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220531112729-2169 describe pod coredns-64897985d-m9wpk metrics-server-b955d9d8-4nh24 dashboard-metrics-scraper-56974995fc-r6z52 kubernetes-dashboard-8469778f77-8v2px
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220531112729-2169 describe pod coredns-64897985d-m9wpk metrics-server-b955d9d8-4nh24 dashboard-metrics-scraper-56974995fc-r6z52 kubernetes-dashboard-8469778f77-8v2px: exit status 1 (193.546682ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-m9wpk" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-4nh24" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-r6z52" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-8v2px" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220531112729-2169 describe pod coredns-64897985d-m9wpk metrics-server-b955d9d8-4nh24 dashboard-metrics-scraper-56974995fc-r6z52 kubernetes-dashboard-8469778f77-8v2px: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (49.94s)
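
The kubelet errors above share one signature: every sandbox teardown runs /usr/sbin/iptables -t nat -D POSTROUTING -s <pod IP> -j CNI-b5e489e78628abba5cd617ae, and because that per-pod chain no longer exists, iptables exits with status 2, so the CoreDNS sandbox can be neither cleaned up nor recreated. A minimal way to confirm the missing chain by hand (a diagnostic sketch, not part of the test suite; the profile/container name is copied from the logs above):

	# Probe the nat table inside the kic node container (docker driver).
	docker exec newest-cni-20220531112729-2169 iptables -t nat -S POSTROUTING | grep CNI- \
	  || echo "no CNI- jump rules left in POSTROUTING"
	# Probe the specific chain the kubelet tried to delete from.
	docker exec newest-cni-20220531112729-2169 iptables -t nat -L CNI-b5e489e78628abba5cd617ae -n \
	  || echo "chain already removed, which is why the kubelet's -D fails with exit status 2"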

                                                
                                    

Test pass (248/288)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.12
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.23.6/json-events 6.81
11 TestDownloadOnly/v1.23.6/preload-exists 0
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.74
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.42
18 TestDownloadOnlyKic 7.08
19 TestBinaryMirror 1.68
20 TestOffline 51.59
22 TestAddons/Setup 87.93
26 TestAddons/parallel/MetricsServer 5.79
27 TestAddons/parallel/HelmTiller 13.36
29 TestAddons/parallel/CSI 41.6
31 TestAddons/serial/GCPAuth 14.34
32 TestAddons/StoppedEnableDisable 13.18
33 TestCertOptions 30.33
34 TestCertExpiration 215.3
35 TestDockerFlags 28.26
36 TestForceSystemdFlag 29.68
37 TestForceSystemdEnv 27.16
39 TestHyperKitDriverInstallOrUpdate 7.02
42 TestErrorSpam/setup 23.28
43 TestErrorSpam/start 2.18
44 TestErrorSpam/status 1.31
45 TestErrorSpam/pause 1.9
46 TestErrorSpam/unpause 1.95
47 TestErrorSpam/stop 13.12
50 TestFunctional/serial/CopySyncFile 0
51 TestFunctional/serial/StartWithProxy 41.43
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 6.56
54 TestFunctional/serial/KubeContext 0.03
55 TestFunctional/serial/KubectlGetPods 1.46
58 TestFunctional/serial/CacheCmd/cache/add_remote 4.9
59 TestFunctional/serial/CacheCmd/cache/add_local 1.83
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
61 TestFunctional/serial/CacheCmd/cache/list 0.07
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.51
63 TestFunctional/serial/CacheCmd/cache/cache_reload 2.38
64 TestFunctional/serial/CacheCmd/cache/delete 0.15
65 TestFunctional/serial/MinikubeKubectlCmd 0.5
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.63
67 TestFunctional/serial/ExtraConfig 29.6
68 TestFunctional/serial/ComponentHealth 0.05
69 TestFunctional/serial/LogsCmd 3.21
70 TestFunctional/serial/LogsFileCmd 3.19
72 TestFunctional/parallel/ConfigCmd 0.45
74 TestFunctional/parallel/DryRun 1.75
75 TestFunctional/parallel/InternationalLanguage 0.61
76 TestFunctional/parallel/StatusCmd 1.42
79 TestFunctional/parallel/ServiceCmd 14.25
81 TestFunctional/parallel/AddonsCmd 0.26
82 TestFunctional/parallel/PersistentVolumeClaim 25.37
84 TestFunctional/parallel/SSHCmd 0.97
85 TestFunctional/parallel/CpCmd 1.66
86 TestFunctional/parallel/MySQL 19.35
87 TestFunctional/parallel/FileSync 0.5
88 TestFunctional/parallel/CertSync 2.62
92 TestFunctional/parallel/NodeLabels 0.04
94 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
96 TestFunctional/parallel/Version/short 0.15
97 TestFunctional/parallel/Version/components 1.01
98 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
99 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
100 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
101 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
102 TestFunctional/parallel/ImageCommands/ImageBuild 3.91
103 TestFunctional/parallel/ImageCommands/Setup 2.25
104 TestFunctional/parallel/DockerEnv/bash 1.67
105 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.42
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.36
109 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.4
110 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.94
111 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.78
112 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
113 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.75
114 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.43
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
116 TestFunctional/parallel/ProfileCmd/profile_list 0.51
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.66
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.16
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/MountCmd/any-port 8.98
129 TestFunctional/parallel/MountCmd/specific-port 2.93
130 TestFunctional/delete_addon-resizer_images 0.16
131 TestFunctional/delete_my-image_image 0.07
132 TestFunctional/delete_minikube_cached_images 0.07
142 TestJSONOutput/start/Command 36.07
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.68
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.64
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 12.44
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.76
167 TestKicCustomNetwork/create_custom_network 25.71
168 TestKicCustomNetwork/use_default_bridge_network 26.2
169 TestKicExistingNetwork 27.19
170 TestKicCustomSubnet 26.3
171 TestMainNoArgs 0.07
172 TestMinikubeProfile 54.96
175 TestMountStart/serial/StartWithMountFirst 6.88
176 TestMountStart/serial/VerifyMountFirst 0.42
177 TestMountStart/serial/StartWithMountSecond 7.12
178 TestMountStart/serial/VerifyMountSecond 0.44
179 TestMountStart/serial/DeleteFirst 2.39
180 TestMountStart/serial/VerifyMountPostDelete 0.42
181 TestMountStart/serial/Stop 1.62
182 TestMountStart/serial/RestartStopped 4.8
183 TestMountStart/serial/VerifyMountPostStop 0.42
186 TestMultiNode/serial/FreshStart2Nodes 70.41
187 TestMultiNode/serial/DeployApp2Nodes 5.24
188 TestMultiNode/serial/PingHostFrom2Pods 0.79
189 TestMultiNode/serial/AddNode 25.58
190 TestMultiNode/serial/ProfileList 0.51
191 TestMultiNode/serial/CopyFile 16.24
192 TestMultiNode/serial/StopNode 14.11
193 TestMultiNode/serial/StartAfterStop 25.18
194 TestMultiNode/serial/RestartKeepsNodes 119.16
195 TestMultiNode/serial/DeleteNode 18.87
196 TestMultiNode/serial/StopMultiNode 25.15
197 TestMultiNode/serial/RestartMultiNode 57.05
198 TestMultiNode/serial/ValidateNameConflict 26.89
204 TestScheduledStopUnix 97.8
205 TestSkaffold 55.84
207 TestInsufficientStorage 12.65
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.3
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.41
225 TestStoppedBinaryUpgrade/Setup 0.77
227 TestStoppedBinaryUpgrade/MinikubeLogs 3.7
229 TestPause/serial/Start 38.46
230 TestPause/serial/SecondStartNoReconfiguration 6.23
231 TestPause/serial/Pause 0.7
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
242 TestNoKubernetes/serial/StartWithK8s 26.61
243 TestNoKubernetes/serial/StartWithStopK8s 17.25
244 TestNoKubernetes/serial/Start 8.24
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
246 TestNoKubernetes/serial/ProfileList 1.08
247 TestNoKubernetes/serial/Stop 1.96
248 TestNetworkPlugins/group/auto/Start 42.13
249 TestNoKubernetes/serial/StartNoArgs 4.36
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
251 TestNetworkPlugins/group/kindnet/Start 47.36
252 TestNetworkPlugins/group/auto/KubeletFlags 0.44
253 TestNetworkPlugins/group/auto/NetCatPod 12.21
254 TestNetworkPlugins/group/auto/DNS 0.11
255 TestNetworkPlugins/group/auto/Localhost 0.11
256 TestNetworkPlugins/group/auto/HairPin 5.12
257 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
258 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
259 TestNetworkPlugins/group/kindnet/NetCatPod 13.76
260 TestNetworkPlugins/group/cilium/Start 69.21
261 TestNetworkPlugins/group/kindnet/DNS 0.12
262 TestNetworkPlugins/group/kindnet/Localhost 0.11
263 TestNetworkPlugins/group/kindnet/HairPin 0.12
264 TestNetworkPlugins/group/calico/Start 67.76
265 TestNetworkPlugins/group/cilium/ControllerPod 5.02
266 TestNetworkPlugins/group/cilium/KubeletFlags 0.47
267 TestNetworkPlugins/group/cilium/NetCatPod 11.34
268 TestNetworkPlugins/group/calico/ControllerPod 5.02
269 TestNetworkPlugins/group/cilium/DNS 0.12
270 TestNetworkPlugins/group/cilium/Localhost 0.11
271 TestNetworkPlugins/group/cilium/HairPin 0.11
272 TestNetworkPlugins/group/calico/KubeletFlags 0.46
273 TestNetworkPlugins/group/calico/NetCatPod 13.69
274 TestNetworkPlugins/group/false/Start 79.24
275 TestNetworkPlugins/group/calico/DNS 0.12
276 TestNetworkPlugins/group/calico/Localhost 0.1
277 TestNetworkPlugins/group/calico/HairPin 0.11
278 TestNetworkPlugins/group/bridge/Start 40.76
279 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
280 TestNetworkPlugins/group/bridge/NetCatPod 10.67
281 TestNetworkPlugins/group/bridge/DNS 0.13
282 TestNetworkPlugins/group/bridge/Localhost 0.12
283 TestNetworkPlugins/group/bridge/HairPin 0.1
284 TestNetworkPlugins/group/enable-default-cni/Start 39.48
285 TestNetworkPlugins/group/false/KubeletFlags 0.46
286 TestNetworkPlugins/group/false/NetCatPod 11.64
287 TestNetworkPlugins/group/false/DNS 0.12
288 TestNetworkPlugins/group/false/Localhost 0.11
289 TestNetworkPlugins/group/false/HairPin 5.11
290 TestNetworkPlugins/group/kubenet/Start 78.18
291 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
292 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.72
293 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
294 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
295 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
298 TestNetworkPlugins/group/kubenet/KubeletFlags 0.45
299 TestNetworkPlugins/group/kubenet/NetCatPod 11.65
300 TestNetworkPlugins/group/kubenet/DNS 0.12
301 TestNetworkPlugins/group/kubenet/Localhost 0.1
302 TestNetworkPlugins/group/kubenet/HairPin 0.1
304 TestStartStop/group/no-preload/serial/FirstStart 49.22
305 TestStartStop/group/no-preload/serial/DeployApp 10.69
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
307 TestStartStop/group/no-preload/serial/Stop 12.56
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
309 TestStartStop/group/no-preload/serial/SecondStart 358.22
312 TestStartStop/group/old-k8s-version/serial/Stop 1.63
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.62
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.47
320 TestStartStop/group/embed-certs/serial/FirstStart 38.86
321 TestStartStop/group/embed-certs/serial/DeployApp 9.74
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.69
323 TestStartStop/group/embed-certs/serial/Stop 12.58
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
325 TestStartStop/group/embed-certs/serial/SecondStart 332.34
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.6
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.46
332 TestStartStop/group/default-k8s-different-port/serial/FirstStart 40.35
333 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.68
334 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.71
335 TestStartStop/group/default-k8s-different-port/serial/Stop 12.53
336 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.32
337 TestStartStop/group/default-k8s-different-port/serial/SecondStart 333.49
339 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 9.01
340 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.58
341 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.47
344 TestStartStop/group/newest-cni/serial/FirstStart 37.14
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.73
347 TestStartStop/group/newest-cni/serial/Stop 12.73
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
349 TestStartStop/group/newest-cni/serial/SecondStart 17.8
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.48
TestDownloadOnly/v1.16.0/json-events (16.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220531101206-2169 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220531101206-2169 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (16.123232467s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.12s)
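
The json-events subtests drive minikube start with -o=json, which makes it emit one JSON event per line instead of the usual styled output. A quick way to eyeball those events by hand (a sketch, assuming jq is installed and using a hypothetical profile name; the .data.name field carries the step title in minikube's CloudEvents-style output, though the test asserts with its own parser):

	out/minikube-darwin-amd64 start -o=json --download-only -p download-only-demo \
	    --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'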

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220531101206-2169
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220531101206-2169: exit status 85 (312.126551ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 10:12:06
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 10:12:06.508795    2180 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:12:06.508995    2180 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:12:06.509002    2180 out.go:309] Setting ErrFile to fd 2...
	I0531 10:12:06.509005    2180 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:12:06.509105    2180 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	W0531 10:12:06.509205    2180 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/config/config.json: no such file or directory
	I0531 10:12:06.509667    2180 out.go:303] Setting JSON to true
	I0531 10:12:06.526456    2180 start.go:115] hostinfo: {"hostname":"37309.local","uptime":695,"bootTime":1654016431,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:12:06.526563    2180 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:12:06.549254    2180 out.go:97] [download-only-20220531101206-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:12:06.549347    2180 notify.go:193] Checking for updates...
	W0531 10:12:06.549383    2180 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball: no such file or directory
	I0531 10:12:06.570117    2180 out.go:169] MINIKUBE_LOCATION=14079
	I0531 10:12:06.611898    2180 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:12:06.654056    2180 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:12:06.674980    2180 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:12:06.717039    2180 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	W0531 10:12:06.759116    2180 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 10:12:06.759361    2180 driver.go:358] Setting default libvirt URI to qemu:///system
	W0531 10:12:06.822141    2180 docker.go:113] docker version returned error: exit status 1
	I0531 10:12:06.842989    2180 out.go:97] Using the docker driver based on user configuration
	I0531 10:12:06.843024    2180 start.go:284] selected driver: docker
	I0531 10:12:06.843031    2180 start.go:806] validating driver "docker" against <nil>
	I0531 10:12:06.843156    2180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:12:06.963154    2180 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:12:06.984956    2180 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0531 10:12:07.006043    2180 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0531 10:12:07.047803    2180 out.go:169] 
	W0531 10:12:07.069117    2180 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0531 10:12:07.089974    2180 out.go:169] 
	I0531 10:12:07.131997    2180 out.go:169] 
	W0531 10:12:07.152987    2180 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0531 10:12:07.153092    2180 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0531 10:12:07.153130    2180 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0531 10:12:07.173874    2180 out.go:169] 
	I0531 10:12:07.195206    2180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:12:07.312902    2180 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0531 10:12:07.334069    2180 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0531 10:12:07.334139    2180 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 10:12:07.381036    2180 out.go:169] 
	W0531 10:12:07.402106    2180 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0531 10:12:07.402205    2180 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0531 10:12:07.402238    2180 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0531 10:12:07.422838    2180 out.go:169] 
	I0531 10:12:07.465053    2180 out.go:169] 
	W0531 10:12:07.485947    2180 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0531 10:12:07.506988    2180 out.go:169] 
	I0531 10:12:07.528050    2180 start_flags.go:373] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0531 10:12:07.528162    2180 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 10:12:07.548863    2180 out.go:169] Using Docker Desktop driver with the root privilege
	I0531 10:12:07.570146    2180 cni.go:95] Creating CNI manager for ""
	I0531 10:12:07.570162    2180 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:12:07.570173    2180 start_flags.go:306] config:
	{Name:download-only-20220531101206-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220531101206-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:12:07.592046    2180 out.go:97] Starting control plane node download-only-20220531101206-2169 in cluster download-only-20220531101206-2169
	I0531 10:12:07.592095    2180 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 10:12:07.612922    2180 out.go:97] Pulling base image ...
	I0531 10:12:07.612966    2180 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 10:12:07.613016    2180 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 10:12:07.613137    2180 cache.go:107] acquiring lock: {Name:mk07cc7f7559770f9f4d7a752db1371d8c246008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:12:07.613149    2180 cache.go:107] acquiring lock: {Name:mk7721dc0f8846c87a2957669538a14b883ed0f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:12:07.615284    2180 cache.go:107] acquiring lock: {Name:mk1982571e1cfc915b1b0606da07e9e7dee50b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:12:07.614119    2180 cache.go:107] acquiring lock: {Name:mk58ec9b5ab6f4533b99a7b2cbd004879a17496e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:12:07.615354    2180 cache.go:107] acquiring lock: {Name:mk084d2c7ff0a676cf2ef8abe3dce83e5c2af9fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:12:07.615426    2180 cache.go:107] acquiring lock: {Name:mk8302bc31bec15feb01e22a62259465caf91e3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:12:07.615433    2180 cache.go:107] acquiring lock: {Name:mk9ec793ffbb36e8c5fc6e211d63fae9f1734dcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:12:07.615539    2180 cache.go:107] acquiring lock: {Name:mkf92b4eb4d6d3d26f878b89770a9e38ee1f3415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 10:12:07.615657    2180 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0531 10:12:07.615659    2180 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0531 10:12:07.615685    2180 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/download-only-20220531101206-2169/config.json ...
	I0531 10:12:07.615709    2180 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 10:12:07.615733    2180 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/download-only-20220531101206-2169/config.json: {Name:mke3dd6c821cc229b856487824738cac880151a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 10:12:07.615742    2180 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0531 10:12:07.615822    2180 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0531 10:12:07.616105    2180 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 10:12:07.616229    2180 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0531 10:12:07.616244    2180 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0531 10:12:07.616251    2180 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0531 10:12:07.616736    2180 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0531 10:12:07.616740    2180 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0531 10:12:07.616745    2180 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0531 10:12:07.622100    2180 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0531 10:12:07.624198    2180 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0531 10:12:07.624256    2180 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0531 10:12:07.624828    2180 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0531 10:12:07.625137    2180 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0531 10:12:07.625607    2180 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0531 10:12:07.625748    2180 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0531 10:12:07.626027    2180 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0531 10:12:07.677104    2180 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 to local cache
	I0531 10:12:07.677298    2180 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local cache directory
	I0531 10:12:07.677414    2180 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 to local cache
	I0531 10:12:08.244946    2180 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0531 10:12:08.308567    2180 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0531 10:12:08.366848    2180 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0531 10:12:08.366983    2180 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0531 10:12:08.366998    2180 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 753.809856ms
	I0531 10:12:08.367010    2180 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0531 10:12:08.367678    2180 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0531 10:12:08.367681    2180 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0531 10:12:08.368106    2180 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0531 10:12:08.368130    2180 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0531 10:12:08.424118    2180 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0531 10:12:10.571078    2180 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 10:12:10.571094    2180 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.95801003s
	I0531 10:12:10.571105    2180 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 10:12:10.653185    2180 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0531 10:12:10.901861    2180 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0531 10:12:10.901877    2180 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 3.288107004s
	I0531 10:12:10.901886    2180 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0531 10:12:12.099057    2180 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0531 10:12:12.099077    2180 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 4.485911542s
	I0531 10:12:12.099087    2180 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0531 10:12:12.326925    2180 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0531 10:12:12.326942    2180 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 4.711667749s
	I0531 10:12:12.326952    2180 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0531 10:12:12.686391    2180 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0531 10:12:12.686407    2180 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 5.073342492s
	I0531 10:12:12.686416    2180 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0531 10:12:12.730533    2180 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0531 10:12:12.730550    2180 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 5.116687698s
	I0531 10:12:12.730560    2180 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0531 10:12:14.010487    2180 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0531 10:12:14.010503    2180 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 6.395266545s
	I0531 10:12:14.010512    2180 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0531 10:12:14.010525    2180 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220531101206-2169"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)
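
Note the download.go lines in the Last Start log above: each binary is fetched with a ?checksum=file:<url>.sha1 query, i.e. minikube downloads the published SHA-1 digest alongside the binary and rejects the file if the two disagree. An equivalent manual check (a sketch; URL and version are copied from the log, and it assumes the .sha1 file holds just the bare digest, as the checksum=file: scheme implies):

	curl -fsSLO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl
	curl -fsSL https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -o kubectl.sha1
	# shasum -c expects "<digest>  <filename>" lines, so build one from the bare digest.
	echo "$(cat kubectl.sha1)  kubectl" | shasum -a 1 -c -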

                                                
                                    
TestDownloadOnly/v1.23.6/json-events (6.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220531101206-2169 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220531101206-2169 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker : (6.805175579s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (6.81s)

                                                
                                    
TestDownloadOnly/v1.23.6/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)
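
preload-exists passes here because a v1.23.6 preload tarball is already in the cache, whereas the v1.16.0 run above fell back to caching individual images after its "Failed to list preload files" warning. The hand equivalent is just a stat of the cache directory (a sketch; MINIKUBE_HOME is the path exported in the logs above, and the glob over the tarball name is an assumption about the naming scheme):

	ls "$MINIKUBE_HOME/cache/preloaded-tarball/"*v1.23.6*docker*.tar.lz4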

                                                
                                    
TestDownloadOnly/v1.23.6/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220531101206-2169
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220531101206-2169: exit status 85 (289.172422ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 10:12:23
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 10:12:23.173370    2233 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:12:23.173596    2233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:12:23.173601    2233 out.go:309] Setting ErrFile to fd 2...
	I0531 10:12:23.173608    2233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:12:23.173724    2233 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	W0531 10:12:23.173822    2233 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/config/config.json: no such file or directory
	I0531 10:12:23.173979    2233 out.go:303] Setting JSON to true
	I0531 10:12:23.189441    2233 start.go:115] hostinfo: {"hostname":"37309.local","uptime":712,"bootTime":1654016431,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:12:23.189567    2233 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:12:23.211474    2233 out.go:97] [download-only-20220531101206-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:12:23.211569    2233 notify.go:193] Checking for updates...
	W0531 10:12:23.211589    2233 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball: no such file or directory
	I0531 10:12:23.232123    2233 out.go:169] MINIKUBE_LOCATION=14079
	I0531 10:12:23.253208    2233 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:12:23.274522    2233 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:12:23.296555    2233 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:12:23.318595    2233 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	W0531 10:12:23.361399    2233 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 10:12:23.362131    2233 config.go:178] Loaded profile config "download-only-20220531101206-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0531 10:12:23.362211    2233 start.go:714] api.Load failed for download-only-20220531101206-2169: filestore "download-only-20220531101206-2169": Docker machine "download-only-20220531101206-2169" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 10:12:23.362283    2233 driver.go:358] Setting default libvirt URI to qemu:///system
	W0531 10:12:23.362316    2233 start.go:714] api.Load failed for download-only-20220531101206-2169: filestore "download-only-20220531101206-2169": Docker machine "download-only-20220531101206-2169" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 10:12:23.432820    2233 docker.go:137] docker version: linux-20.10.14
	I0531 10:12:23.432923    2233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:12:23.555938    2233 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-05-31 17:12:23.49073601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:12:23.577670    2233 out.go:97] Using the docker driver based on existing profile
	I0531 10:12:23.577704    2233 start.go:284] selected driver: docker
	I0531 10:12:23.577714    2233 start.go:806] validating driver "docker" against &{Name:download-only-20220531101206-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220531101206-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:12:23.578095    2233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:12:23.703836    2233 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-05-31 17:12:23.638243849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:12:23.705912    2233 cni.go:95] Creating CNI manager for ""
	I0531 10:12:23.705931    2233 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 10:12:23.705960    2233 start_flags.go:306] config:
	{Name:download-only-20220531101206-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220531101206-2169 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:12:23.727381    2233 out.go:97] Starting control plane node download-only-20220531101206-2169 in cluster download-only-20220531101206-2169
	I0531 10:12:23.727489    2233 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 10:12:23.749469    2233 out.go:97] Pulling base image ...
	I0531 10:12:23.749592    2233 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 10:12:23.749692    2233 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 10:12:23.815220    2233 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 10:12:23.815241    2233 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 to local cache
	I0531 10:12:23.815377    2233 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local cache directory
	I0531 10:12:23.815392    2233 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local cache directory, skipping pull
	I0531 10:12:23.815397    2233 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in cache, skipping pull
	I0531 10:12:23.815406    2233 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 as a tarball
	I0531 10:12:23.817990    2233 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 10:12:23.817998    2233 cache.go:57] Caching tarball of preloaded images
	I0531 10:12:23.818148    2233 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 10:12:23.839508    2233 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0531 10:12:23.839604    2233 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0531 10:12:23.936023    2233 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220531101206-2169"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.29s)
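
For reference, the preload fetch logged above can be reproduced and verified by hand. A minimal sketch, assuming the macOS "md5" tool and the cache layout shown in the log; the URL and checksum are copied from the download.go line, and MINIKUBE_HOME stands for whatever the run exported:

    PRELOAD=preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
    DEST="$MINIKUBE_HOME/cache/preloaded-tarball/$PRELOAD"
    mkdir -p "$(dirname "$DEST")"   # the cache dir may not exist yet (see the preload.go warning above)
    curl -fL -o "$DEST" "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/$PRELOAD"
    md5 -q "$DEST"                  # should print a6c3f222f3cce2a88e27e126d64eb717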

TestDownloadOnly/DeleteAll (0.74s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.74s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.42s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220531101206-2169
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.42s)

TestDownloadOnlyKic (7.08s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220531101231-2169 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220531101231-2169 --force --alsologtostderr --driver=docker : (5.934985045s)
helpers_test.go:175: Cleaning up "download-docker-20220531101231-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220531101231-2169
--- PASS: TestDownloadOnlyKic (7.08s)

TestBinaryMirror (1.68s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220531101238-2169 --alsologtostderr --binary-mirror http://127.0.0.1:49608 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220531101238-2169 --alsologtostderr --binary-mirror http://127.0.0.1:49608 --driver=docker : (1.022027386s)
helpers_test.go:175: Cleaning up "binary-mirror-20220531101238-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220531101238-2169
--- PASS: TestBinaryMirror (1.68s)
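
In this test the --binary-mirror flag points at a plain HTTP file server that the harness starts on a free local port. A minimal sketch of the same idea, with python3 and the port number as assumptions:

    python3 -m http.server 49608 --bind 127.0.0.1 &   # any static file server will do
    out/minikube-darwin-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:49608 --driver=docker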

TestOffline (51.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220531104925-2169 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220531104925-2169 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (48.481532401s)
helpers_test.go:175: Cleaning up "offline-docker-20220531104925-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220531104925-2169

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220531104925-2169: (3.109557496s)
--- PASS: TestOffline (51.59s)

TestAddons/Setup (87.93s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220531101240-2169 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220531101240-2169 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m27.929903649s)
--- PASS: TestAddons/Setup (87.93s)
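
The same addons can also be inspected and toggled after setup on a running profile; a short sketch using the addons subcommands (the profile name here is an assumption):

    out/minikube-darwin-amd64 -p addons-demo addons list
    out/minikube-darwin-amd64 -p addons-demo addons enable metrics-server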

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 2.46167ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-5fjrm" [f3294ece-b297-4b20-830d-1b7dc44ac3a2] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010840489s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220531101240-2169 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220531101240-2169 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/HelmTiller (13.36s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 12.359404ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-grhh2" [e292fb10-36b2-4d0d-a8f4-84242c4c1af4] Running
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01102183s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220531101240-2169 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220531101240-2169 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.856813604s)
addons_test.go:428: kubectl --context addons-20220531101240-2169 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:440: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220531101240-2169 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.36s)
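
The stderr warning above is expected: the test passes -t (allocate a TTY) while stdin is a pipe, so kubectl falls back to logs. Running the same check with -i only avoids the warning; a sketch with an assumed context name:

    kubectl --context addons-demo run --rm helm-test --restart=Never \
      --image=alpine/helm:2.16.3 -i --namespace=kube-system -- version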

TestAddons/parallel/CSI (41.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 6.664178ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220531101240-2169 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220531101240-2169 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220531101240-2169 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220531101240-2169 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [427ce35c-1706-4713-9371-279e6193b976] Pending
helpers_test.go:342: "task-pv-pod" [427ce35c-1706-4713-9371-279e6193b976] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [427ce35c-1706-4713-9371-279e6193b976] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.009061383s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220531101240-2169 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220531101240-2169 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220531101240-2169 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220531101240-2169 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220531101240-2169 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220531101240-2169 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220531101240-2169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220531101240-2169 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [81bf53b5-dbfa-46f9-9826-6a4003f7dc5a] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [81bf53b5-dbfa-46f9-9826-6a4003f7dc5a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [81bf53b5-dbfa-46f9-9826-6a4003f7dc5a] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.012602447s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220531101240-2169 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220531101240-2169 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220531101240-2169 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220531101240-2169 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220531101240-2169 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.798498774s)
addons_test.go:592: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220531101240-2169 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.60s)
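
The polling loops above (repeated "get pvc ... -o jsonpath={.status.phase}") can be written as a single blocking call on newer clients; a sketch, assuming kubectl 1.23+ and the same object names:

    kubectl --context addons-demo wait --timeout=6m \
      --for=jsonpath='{.status.phase}'=Bound pvc/hpvc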

TestAddons/serial/GCPAuth (14.34s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220531101240-2169 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c580615f-2400-49f4-a1ab-80bea0d5b709] Pending
helpers_test.go:342: "busybox" [c580615f-2400-49f4-a1ab-80bea0d5b709] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [c580615f-2400-49f4-a1ab-80bea0d5b709] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.008278956s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220531101240-2169 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220531101240-2169 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220531101240-2169 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220531101240-2169 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220531101240-2169 addons disable gcp-auth --alsologtostderr -v=1: (5.868698769s)
--- PASS: TestAddons/serial/GCPAuth (14.34s)

TestAddons/StoppedEnableDisable (13.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220531101240-2169
addons_test.go:132: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220531101240-2169: (12.795455526s)
addons_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220531101240-2169
addons_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220531101240-2169
--- PASS: TestAddons/StoppedEnableDisable (13.18s)

TestCertOptions (30.33s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220531105047-2169 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220531105047-2169 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (26.431152564s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220531105047-2169 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"

=== CONT  TestCertOptions
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220531105047-2169 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220531105047-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220531105047-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220531105047-2169: (2.85582542s)
--- PASS: TestCertOptions (30.33s)
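
To confirm which of the requested --apiserver-ips and --apiserver-names actually landed in the certificate, the same ssh command can be filtered for the SAN block; a sketch with an assumed profile name:

    out/minikube-darwin-amd64 -p cert-options-demo ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"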

TestCertExpiration (215.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220531105047-2169 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220531105047-2169 --memory=2048 --cert-expiration=3m --driver=docker : (26.894184983s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220531105047-2169 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220531105047-2169 --memory=2048 --cert-expiration=8760h --driver=docker : (5.527319265s)
helpers_test.go:175: Cleaning up "cert-expiration-20220531105047-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220531105047-2169
E0531 10:54:20.588783    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220531105047-2169: (2.875336257s)
--- PASS: TestCertExpiration (215.30s)
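
The rotation this test exercises can be observed directly by printing the apiserver certificate's expiry before and after the second start (with --cert-expiration=3m and then 8760h); a sketch with an assumed profile name:

    out/minikube-darwin-amd64 -p cert-expiration-demo ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"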

TestDockerFlags (28.26s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220531105018-2169 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220531105018-2169 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (24.309935493s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220531105018-2169 ssh "sudo systemctl show docker --property=Environment --no-pager"

=== CONT  TestDockerFlags
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220531105018-2169 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
helpers_test.go:175: Cleaning up "docker-flags-20220531105018-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220531105018-2169

=== CONT  TestDockerFlags
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220531105018-2169: (2.938544391s)
--- PASS: TestDockerFlags (28.26s)
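
The two systemctl probes above map onto the flags: --docker-env values should show up in the Environment= property and --docker-opt values as extra arguments in the ExecStart= line. A sketch of checking by hand (profile name assumed, expected values hedged):

    out/minikube-darwin-amd64 -p docker-flags-demo ssh \
      "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
    out/minikube-darwin-amd64 -p docker-flags-demo ssh \
      "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug and --icc=true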

TestForceSystemdFlag (29.68s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220531105017-2169 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220531105017-2169 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (26.105685366s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220531105017-2169 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdFlag
helpers_test.go:175: Cleaning up "force-systemd-flag-20220531105017-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220531105017-2169

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220531105017-2169: (3.022686571s)
--- PASS: TestForceSystemdFlag (29.68s)
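
With --force-systemd the cgroup-driver probe above should print "systemd" instead of the Docker default "cgroupfs"; a sketch with an assumed profile name:

    out/minikube-darwin-amd64 -p force-systemd-demo ssh \
      "docker info --format {{.CgroupDriver}}"   # expect: systemd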

TestForceSystemdEnv (27.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220531104951-2169 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220531104951-2169 --memory=2048 --alsologtostderr -v=5 --driver=docker : (23.804071835s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220531104951-2169 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220531104951-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220531104951-2169

=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220531104951-2169: (2.834405098s)
--- PASS: TestForceSystemdEnv (27.16s)

TestHyperKitDriverInstallOrUpdate (7.02s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.02s)

TestErrorSpam/setup (23.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220531101534-2169 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220531101534-2169 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 --driver=docker : (23.277887064s)
--- PASS: TestErrorSpam/setup (23.28s)

TestErrorSpam/start (2.18s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 start --dry-run
--- PASS: TestErrorSpam/start (2.18s)

TestErrorSpam/status (1.31s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 status
--- PASS: TestErrorSpam/status (1.31s)

TestErrorSpam/pause (1.9s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 pause
--- PASS: TestErrorSpam/pause (1.90s)

TestErrorSpam/unpause (1.95s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

TestErrorSpam/stop (13.12s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 stop: (12.475430379s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220531101534-2169 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220531101534-2169 stop
--- PASS: TestErrorSpam/stop (13.12s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/test/nested/copy/2169/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (41.429723243s)
--- PASS: TestFunctional/serial/StartWithProxy (41.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.56s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --alsologtostderr -v=8: (6.559193959s)
functional_test.go:655: soft start took 6.560741461s for "functional-20220531101620-2169" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.56s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (1.46s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220531101620-2169 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220531101620-2169 get po -A: (1.459217169s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.46s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache add k8s.gcr.io/pause:3.1: (1.244019231s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache add k8s.gcr.io/pause:3.3: (1.844547841s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache add k8s.gcr.io/pause:latest: (1.813606193s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.90s)

TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220531101620-2169 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local624747717/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache add minikube-local-cache-test:functional-20220531101620-2169
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache add minikube-local-cache-test:functional-20220531101620-2169: (1.306777224s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache delete minikube-local-cache-test:functional-20220531101620-2169
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220531101620-2169
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.51s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (429.076929ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 cache reload: (1.057598042s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)
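
Condensed, the reload cycle above is: delete the image inside the node, watch crictl fail, reload from minikube's cache, watch crictl succeed. The by-hand equivalent, using only commands the test itself runs (profile name assumed):

    minikube -p demo ssh sudo docker rmi k8s.gcr.io/pause:latest        # remove the image inside the node
    minikube -p demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # fails: no such image
    minikube -p demo cache reload                                       # push cached images back into the node
    minikube -p demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again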

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 kubectl -- --context functional-20220531101620-2169 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.63s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220531101620-2169 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.63s)

TestFunctional/serial/ExtraConfig (29.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.597280789s)
functional_test.go:753: restart took 29.597584396s for "functional-20220531101620-2169" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.60s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220531101620-2169 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3.21s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 logs: (3.20521625s)
--- PASS: TestFunctional/serial/LogsCmd (3.21s)

TestFunctional/serial/LogsFileCmd (3.19s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1102632542/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1102632542/001/logs.txt: (3.185765657s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.19s)

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220531101620-2169 config get cpus: exit status 14 (53.277599ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220531101620-2169 config get cpus: exit status 14 (50.934052ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
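Exit status 14 on both config get calls is the assertion here: that is what minikube returns when the requested key is absent from the config, so get fails before the set and again after the unset. A small illustrative sketch of the same round trip (hypothetical checking code, not the test's own):

package main

import (
	"fmt"
	"os/exec"
)

// exitCode runs the minikube binary from the log and reports its exit code.
func exitCode(args ...string) int {
	err := exec.Command("out/minikube-darwin-amd64", args...).Run()
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode()
	}
	if err != nil {
		return -1 // could not start at all
	}
	return 0
}

func main() {
	p := "functional-20220531101620-2169"
	fmt.Println(exitCode("-p", p, "config", "get", "cpus"))    // 14 while the key is unset
	fmt.Println(exitCode("-p", p, "config", "set", "cpus", "2"))
	fmt.Println(exitCode("-p", p, "config", "get", "cpus"))    // 0 once set
	fmt.Println(exitCode("-p", p, "config", "unset", "cpus"))  // back to unset, get fails again
}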

TestFunctional/parallel/DryRun (1.75s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (923.22798ms)
-- stdout --
	* [functional-20220531101620-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0531 10:18:57.460136    3995 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:18:57.460395    3995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:18:57.460401    3995 out.go:309] Setting ErrFile to fd 2...
	I0531 10:18:57.460406    3995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:18:57.460529    3995 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:18:57.460842    3995 out.go:303] Setting JSON to false
	I0531 10:18:57.479274    3995 start.go:115] hostinfo: {"hostname":"37309.local","uptime":1106,"bootTime":1654016431,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:18:57.479388    3995 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:18:57.501503    3995 out.go:177] * [functional-20220531101620-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	I0531 10:18:57.522954    3995 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 10:18:57.565157    3995 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:18:57.607098    3995 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:18:57.648883    3995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:18:57.691296    3995 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 10:18:57.713344    3995 config.go:178] Loaded profile config "functional-20220531101620-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:18:57.713708    3995 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 10:18:57.791157    3995 docker.go:137] docker version: linux-20.10.14
	I0531 10:18:57.791322    3995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:18:57.998773    3995 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 17:18:57.878599961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:18:58.081622    3995 out.go:177] * Using the docker driver based on existing profile
	I0531 10:18:58.103685    3995 start.go:284] selected driver: docker
	I0531 10:18:58.103713    3995 start.go:806] validating driver "docker" against &{Name:functional-20220531101620-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531101620-2169 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:18:58.103925    3995 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 10:18:58.167647    3995 out.go:177] 
	W0531 10:18:58.209809    3995 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0531 10:18:58.252125    3995 out.go:177] 
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.75s)
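The non-zero exit is the point of this test: exit status 23 with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the 1800MB floor the message cites, while the second dry run (no --memory override) validates cleanly against the profile's configured 4000MB. A stand-alone sketch of such a floor check; the constant and function names are illustrative, not minikube's internals:

package main

import "fmt"

const minUsableMiB = 1800 // the minimum the error message above cites

// validateMemory mimics the dry-run rejection: too-small requests fail fast,
// before any driver work is attempted.
func validateMemory(requestedMiB int) error {
	if requestedMiB < minUsableMiB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, minUsableMiB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the log
	fmt.Println(validateMemory(4000)) // accepted (the profile's configured 4000MB)
}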

TestFunctional/parallel/InternationalLanguage (0.61s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220531101620-2169 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (607.372989ms)
-- stdout --
	* [functional-20220531101620-2169] minikube v1.26.0-beta.1 sur Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0531 10:18:47.651914    3788 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:18:47.652189    3788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:18:47.652199    3788 out.go:309] Setting ErrFile to fd 2...
	I0531 10:18:47.652207    3788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:18:47.652464    3788 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:18:47.652875    3788 out.go:303] Setting JSON to false
	I0531 10:18:47.668248    3788 start.go:115] hostinfo: {"hostname":"37309.local","uptime":1096,"bootTime":1654016431,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0531 10:18:47.668345    3788 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 10:18:47.690618    3788 out.go:177] * [functional-20220531101620-2169] minikube v1.26.0-beta.1 sur Darwin 12.4
	I0531 10:18:47.733058    3788 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 10:18:47.774877    3788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 10:18:47.796264    3788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0531 10:18:47.817279    3788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 10:18:47.837889    3788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 10:18:47.860178    3788 config.go:178] Loaded profile config "functional-20220531101620-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:18:47.860821    3788 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 10:18:47.931653    3788 docker.go:137] docker version: linux-20.10.14
	I0531 10:18:47.931796    3788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 10:18:48.055688    3788 info.go:265] docker info: {ID:XCCD:OR4G:KEVC:WHGD:QIVO:NNA4:4PNA:G662:ML55:LM4H:W5O5:ZVYD Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-05-31 17:18:48.010460485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 10:18:48.077646    3788 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0531 10:18:48.099324    3788 start.go:284] selected driver: docker
	I0531 10:18:48.099353    3788 start.go:806] validating driver "docker" against &{Name:functional-20220531101620-2169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531101620-2169 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 10:18:48.099531    3788 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 10:18:48.124501    3788 out.go:177] 
	W0531 10:18:48.146790    3788 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0531 10:18:48.172180    3788 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.61s)

TestFunctional/parallel/StatusCmd (1.42s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.42s)

TestFunctional/parallel/ServiceCmd (14.25s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220531101620-2169 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220531101620-2169 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-5vcng" [3822d325-a1bc-4556-90b5-018911163895] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-5vcng" [3822d325-a1bc-4556-90b5-018911163895] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.010164106s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 service list: (1.068789931s)
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 service --namespace=default --https --url hello-node: (2.026719472s)
functional_test.go:1475: found endpoint: https://127.0.0.1:52356
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 service hello-node --url --format={{.IP}}: (2.024214918s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 service hello-node --url: (2.025007599s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:52427
--- PASS: TestFunctional/parallel/ServiceCmd (14.25s)
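Each service ... --url invocation on the docker driver opens a tunnel and prints a localhost endpoint (here https://127.0.0.1:52356 and http://127.0.0.1:52427); the port is ephemeral and only valid while the tunnel is open. A sketch of probing such an endpoint once it has been printed (illustrative; in the test the URL comes from the command's stdout):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// In the test, this value is parsed from `service hello-node --url` output;
	// the port below is the one captured in this run and will differ every time.
	url := "http://127.0.0.1:52427"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("service not reachable (tunnel closed?):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("echoserver answered with status", resp.StatusCode)
}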

TestFunctional/parallel/AddonsCmd (0.26s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (25.37s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [dbba6bd2-fbe8-40a6-ae0f-497298314c97] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009966909s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220531101620-2169 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220531101620-2169 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220531101620-2169 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220531101620-2169 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [05da27e7-7195-4136-a997-6c5cc0c97b7a] Pending
helpers_test.go:342: "sp-pod" [05da27e7-7195-4136-a997-6c5cc0c97b7a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [05da27e7-7195-4136-a997-6c5cc0c97b7a] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.015015949s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220531101620-2169 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220531101620-2169 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220531101620-2169 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [664e0b2d-cf4f-412d-af72-0617cd728ddb] Pending
helpers_test.go:342: "sp-pod" [664e0b2d-cf4f-412d-af72-0617cd728ddb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [664e0b2d-cf4f-412d-af72-0617cd728ddb] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008175936s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220531101620-2169 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.37s)
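The claim check above is: write a file through the first pod, delete that pod while keeping the PVC, schedule a second pod against the same claim, and confirm the file survived. A compact sketch of the same sequence driven through kubectl (illustrative; the manifests are the test's own testdata files, and the real test waits for the new pod to be Running between the apply and the final exec):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs kubectl against the profile's context, as the test does.
func kubectl(args ...string) error {
	args = append([]string{"--context", "functional-20220531101620-2169"}, args...)
	return exec.Command("kubectl", args...).Run()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through pod 1
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml") // drop the pod, keep the claim
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // new pod, same PVC
	// (the real test waits for the new sp-pod to be Running before this step)
	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil {
		fmt.Println("file did not survive pod recreation:", err)
	}
}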

TestFunctional/parallel/SSHCmd (0.97s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.97s)

TestFunctional/parallel/CpCmd (1.66s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh -n functional-20220531101620-2169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 cp functional-20220531101620-2169:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd3223459813/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh -n functional-20220531101620-2169 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)

TestFunctional/parallel/MySQL (19.35s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220531101620-2169 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-7b49r" [cebc59b9-2998-4349-b58b-ea27611e866d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-7b49r" [cebc59b9-2998-4349-b58b-ea27611e866d] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.043342349s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531101620-2169 exec mysql-b87c45988-7b49r -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531101620-2169 exec mysql-b87c45988-7b49r -- mysql -ppassword -e "show databases;": exit status 1 (127.063844ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531101620-2169 exec mysql-b87c45988-7b49r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.35s)
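The first show databases attempt failing with ERROR 2002 is expected: the pod is Running before mysqld has created its server socket, so the test simply retries the query until it succeeds. A sketch of that retry loop; the 10-attempt, 2-second budget is invented for illustration, the real test derives its own timing:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		err := exec.Command("kubectl",
			"--context", "functional-20220531101620-2169",
			"exec", "mysql-b87c45988-7b49r", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
		if err == nil {
			fmt.Println("mysql is accepting connections")
			return
		}
		// Socket not up yet (ERROR 2002); wait and try again.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became ready")
}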

TestFunctional/parallel/FileSync (0.5s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/2169/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo cat /etc/test/nested/copy/2169/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.50s)

TestFunctional/parallel/CertSync (2.62s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/2169.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo cat /etc/ssl/certs/2169.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/2169.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo cat /usr/share/ca-certificates/2169.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/21692.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo cat /etc/ssl/certs/21692.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/21692.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo cat /usr/share/ca-certificates/21692.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.62s)
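CertSync probes each synced certificate in three places: the PEM under /etc/ssl/certs, the copy under /usr/share/ca-certificates, and the hashed name the trust store looks up (names like 51391683.0 follow OpenSSL's <subject-hash>.0 symlink convention). A sketch of the same probe over minikube ssh (illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/2169.pem",
		"/usr/share/ca-certificates/2169.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		// Each path must be readable inside the node, as the test asserts.
		err := exec.Command("out/minikube-darwin-amd64",
			"-p", "functional-20220531101620-2169",
			"ssh", "sudo cat "+p).Run()
		fmt.Printf("%s readable: %v\n", p, err == nil)
	}
}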

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220531101620-2169 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo systemctl is-active crio": exit status 1 (426.192609ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
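This test passes by failing: with docker as the active runtime, systemctl is-active crio prints inactive and exits with systemd's not-active code 3, which minikube ssh surfaces as the exit status 1 recorded above. A sketch that reads both the text and the exit code (illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-20220531101620-2169",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	// Expect "inactive" in the output and a non-zero code: the remote systemctl
	// exits 3, which minikube ssh reports as a failure (exit status 1 above).
	fmt.Printf("output=%q exit=%d\n", out, code)
}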

TestFunctional/parallel/Version/short (0.15s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)

TestFunctional/parallel/Version/components (1.01s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 version -o=json --components
functional_test.go:2196: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 version -o=json --components: (1.006413873s)
--- PASS: TestFunctional/parallel/Version/components (1.01s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220531101620-2169
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
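image ls --format short emits one fully qualified image reference per line, so presence checks reduce to a line match. An illustrative sketch of such a check (hypothetical assertion code, not the test's own):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-20220531101620-2169",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	want := "k8s.gcr.io/pause:latest" // one of the references listed above
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == want {
			fmt.Println("found", want)
			return
		}
	}
	fmt.Println("missing", want)
}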

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-20220531101620-2169 | eff5e0f1d46ad | 1.24MB |
| docker.io/library/nginx                     | alpine                         | b1c3acb288825 | 23.4MB |
| gcr.io/k8s-minikube/busybox                 | latest                         | beae173ccac6a | 1.24MB |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                        | df7b72818ad2e | 125MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220531101620-2169 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest                         | 0e901e68141fd | 142MB  |
| docker.io/library/mysql                     | 5.7                            | 2a0961b7de03c | 462MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                        | 8fa62c12256df | 135MB  |
| k8s.gcr.io/kube-proxy                       | v1.23.6                        | 4c03754524064 | 112MB  |
| docker.io/kubernetesui/dashboard            | <none>                         | 7fff914c4a615 | 243MB  |
| docker.io/library/minikube-local-cache-test | functional-20220531101620-2169 | c03201a892d82 | 30B    |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                        | 595f327f224a4 | 53.5MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|--------------------------------|---------------|--------|
E0531 10:19:08.499961    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:08.506884    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:08.517226    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:08.537480    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:08.577968    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:08.659672    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:08.820365    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:09.140506    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:09.781917    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:11.063662    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:13.623807    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:18.745978    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:28.988216    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:19:49.470236    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:20:30.430319    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 10:21:52.351626    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format json:
[
  {"id":"b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},
  {"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},
  {"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},
  {"id":"eff5e0f1d46ad3e73f22faabf5453ebd40a8dcd5f58814210001c5c2ab8c8b40","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220531101620-2169"],"size":"1240000"},
  {"id":"2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},
  {"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},
  {"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
  {"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},
  {"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},
  {"id":"c03201a892d825c4eaeddf26a6dded2384b8e461473b35aa2dc9f179f5006ad4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220531101620-2169"],"size":"30"},
  {"id":"0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},
  {"id":"df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"125000000"},
  {"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},
  {"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},
  {"id":"8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"135000000"},
  {"id":"4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"},
  {"id":"595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"53500000"},
  {"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},
  {"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220531101620-2169"],"size":"32900000"},
  {"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"}
]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
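
The JSON listing above is machine-readable; a minimal host-side filter over the same command, assuming jq is installed on the host (jq is not part of the test run):

$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format json \
    | jq -r '.[] | [.repoTags[0], .size] | @tsv'    # one tag/size line per image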

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls --format yaml:
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "135000000"
- id: 4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "112000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "53500000"
- id: df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "125000000"
- id: c03201a892d825c4eaeddf26a6dded2384b8e461473b35aa2dc9f179f5006ad4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220531101620-2169
size: "30"
- id: 2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh pgrep buildkitd: exit status 1 (431.70212ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image build -t localhost/my-image:functional-20220531101620-2169 testdata/build
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image build -t localhost/my-image:functional-20220531101620-2169 testdata/build: (3.061052302s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image build -t localhost/my-image:functional-20220531101620-2169 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 6e12ca963a2c
Removing intermediate container 6e12ca963a2c
---> 87e83a853097
Step 3/3 : ADD content.txt /
---> eff5e0f1d46a
Successfully built eff5e0f1d46a
Successfully tagged localhost/my-image:functional-20220531101620-2169
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.91s)
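
The three build steps above pin down the test's Dockerfile; a minimal reconstruction (the contents of testdata/build are inferred from the log, not shown in it, and the build-demo directory is a stand-in):

$ mkdir -p build-demo && cd build-demo
$ printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile   # inferred Dockerfile
$ echo hello > content.txt
$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 image build -t localhost/my-image:functional-20220531101620-2169 .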

TestFunctional/parallel/ImageCommands/Setup (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.160022131s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.25s)

TestFunctional/parallel/DockerEnv/bash (1.67s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220531101620-2169 docker-env) && out/minikube-darwin-amd64 status -p functional-20220531101620-2169"
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220531101620-2169 docker-env) && out/minikube-darwin-amd64 status -p functional-20220531101620-2169": (1.013104811s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220531101620-2169 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.67s)
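
DockerEnv/bash exercises the documented pattern for pointing the host's docker CLI at the daemon inside the minikube container; the same round trip by hand:

$ eval $(out/minikube-darwin-amd64 -p functional-20220531101620-2169 docker-env)
$ docker images    # now listed from the cluster's daemon, not the host's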

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.42s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.42s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)
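
All three UpdateContextCmd variants run the same command, which rewrites the profile's kubeconfig entry (API server IP and port) if the cluster has moved; a minimal check by hand (kubectl on the host is assumed):

$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 update-context
$ kubectl config current-context    # illustrative follow-up: inspect the refreshed kubeconfig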

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169: (3.041883307s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169: (2.053911335s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.40s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169: (3.918474179s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.94s)
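
ImageLoadDaemon, ImageReloadDaemon, and ImageTagAndLoadDaemon all drive one workflow: tag an image in the host Docker daemon, then copy it into the cluster's runtime. Condensed from the steps above:

$ docker pull gcr.io/google-containers/addon-resizer:1.8.9
$ docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls    # the tag should now be listed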

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image save gcr.io/google-containers/addon-resizer:functional-20220531101620-2169 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image save gcr.io/google-containers/addon-resizer:functional-20220531101620-2169 /Users/jenkins/workspace/addon-resizer-save.tar: (1.777706442s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.78s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image rm gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.426981858s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220531101620-2169 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169: (2.297481276s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.43s)
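
ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together cover the full tarball round trip; condensed, using the same image and tar path as above:

$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 image save gcr.io/google-containers/addon-resizer:functional-20220531101620-2169 /Users/jenkins/workspace/addon-resizer-save.tar   # cluster -> tarball
$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 image rm gcr.io/google-containers/addon-resizer:functional-20220531101620-2169                                                    # drop it from the cluster
$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 image load /Users/jenkins/workspace/addon-resizer-save.tar                                                                        # tarball -> cluster
$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220531101620-2169                                         # cluster -> host daemon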

TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1310: Took "435.483612ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "72.70786ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1361: Took "514.501424ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "148.623211ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)
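
The timings above show the --light listing returning well under the full listing, consistent with it skipping per-profile status probes; for scripting, the JSON form parses cleanly. A minimal sketch (assumes jq, and assumes the output's usual valid/invalid top-level layout):

$ out/minikube-darwin-amd64 profile list -o json | jq -r '.valid[].Name'   # assumed layout: {"valid":[...],"invalid":[...]}
$ out/minikube-darwin-amd64 profile list -o json --light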

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220531101620-2169 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220531101620-2169 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [9f9f6948-3e7f-4efe-baf8-a6e229adfbeb] Pending

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [9f9f6948-3e7f-4efe-baf8-a6e229adfbeb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [9f9f6948-3e7f-4efe-baf8-a6e229adfbeb] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.009591212s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.16s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220531101620-2169 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220531101620-2169 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 3757: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
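
The TunnelCmd serial chain mirrors normal usage: keep `minikube tunnel` running, read the LoadBalancer ingress IP once it is assigned, then stop the tunnel. Condensed from the steps above (with the docker driver on macOS the service answers on 127.0.0.1, as AccessDirect confirms):

$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 tunnel &    # long-running; may prompt for sudo
$ kubectl --context functional-20220531101620-2169 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
$ curl -s http://127.0.0.1 >/dev/null && echo tunnel ok
$ kill %1    # stop the tunnel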

TestFunctional/parallel/MountCmd/any-port (8.98s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220531101620-2169 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port472018665/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1654017528221361000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port472018665/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1654017528221361000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port472018665/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1654017528221361000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port472018665/001/test-1654017528221361000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (417.707024ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 31 17:18 created-by-test
-rw-r--r-- 1 docker docker 24 May 31 17:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 31 17:18 test-1654017528221361000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh cat /mount-9p/test-1654017528221361000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220531101620-2169 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6] Pending
helpers_test.go:342: "busybox-mount" [f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [f0aea8bc-8bf5-4f8f-830f-bb35a2a3e2e6] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.009211733s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220531101620-2169 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220531101620-2169 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port472018665/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.98s)

TestFunctional/parallel/MountCmd/specific-port (2.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220531101620-2169 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2170175691/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (630.130064ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220531101620-2169 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2170175691/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo umount -f /mount-9p": exit status 1 (415.623987ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220531101620-2169 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2170175691/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.93s)
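
Both MountCmd variants wrap the same host-to-guest 9p mount, the second simply pinning --port 46464; by hand (the /tmp/mount-demo host path is a stand-in for any directory):

$ out/minikube-darwin-amd64 mount -p functional-20220531101620-2169 /tmp/mount-demo:/mount-9p --port 46464 &
$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "findmnt -T /mount-9p | grep 9p"    # verify the mount
$ out/minikube-darwin-amd64 -p functional-20220531101620-2169 ssh "sudo umount -f /mount-9p"          # tear it down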

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220531101620-2169
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220531101620-2169
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220531101620-2169
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestJSONOutput/start/Command (36.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220531103123-2169 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220531103123-2169 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (36.061736698s)
--- PASS: TestJSONOutput/start/Command (36.07s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220531103123-2169 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220531103123-2169 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.44s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220531103123-2169 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220531103123-2169 --output=json --user=testUser: (12.441140749s)
--- PASS: TestJSONOutput/stop/Command (12.44s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220531103215-2169 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220531103215-2169 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (330.324618ms)

-- stdout --
	{"specversion":"1.0","id":"8ef0b1db-8ca9-4e27-9799-1fe52fa8a1cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220531103215-2169] minikube v1.26.0-beta.1 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5a37592-69c9-4a48-ba32-3cd9838e756b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"3493256a-5db7-406b-bcee-9cab6a287287","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig"}}
	{"specversion":"1.0","id":"1fe162ee-8ed1-4f59-8141-db82d278663e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"fb371e77-fd9a-408e-ac8d-c3ffda4d2566","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"44d98519-86f6-415b-bb5e-ca935e89a24a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube"}}
	{"specversion":"1.0","id":"19305a35-d000-4d46-a66a-e224ed334bab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220531103215-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220531103215-2169
--- PASS: TestErrorJSONOutput (0.76s)
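
As the stdout above shows, every --output=json line is a CloudEvents envelope with the payload under data; errors carry exitcode, advice, and message there. A minimal filter (assumes jq on the host):

$ out/minikube-darwin-amd64 start -p json-output-error-20220531103215-2169 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
The driver 'fail' is not supported on darwin/amd64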

TestKicCustomNetwork/create_custom_network (25.71s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220531103216-2169 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220531103216-2169 --network=: (22.917435711s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220531103216-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220531103216-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220531103216-2169: (2.725659753s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.71s)

TestKicCustomNetwork/use_default_bridge_network (26.2s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220531103242-2169 --network=bridge
E0531 10:33:03.104458    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220531103242-2169 --network=bridge: (23.551309117s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220531103242-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220531103242-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220531103242-2169: (2.586200499s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.20s)

TestKicExistingNetwork (27.19s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220531103308-2169 --network=existing-network
E0531 10:33:30.800809    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220531103308-2169 --network=existing-network: (24.092535476s)
helpers_test.go:175: Cleaning up "existing-network-20220531103308-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220531103308-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220531103308-2169: (2.698544331s)
--- PASS: TestKicExistingNetwork (27.19s)

TestKicCustomSubnet (26.3s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220531103335-2169 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220531103335-2169 --subnet=192.168.60.0/24: (23.5175281s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220531103335-2169 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220531103335-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220531103335-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220531103335-2169: (2.711466804s)
--- PASS: TestKicCustomSubnet (26.30s)
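
The Kic network tests map one-to-one onto start flags: --network= (empty value: create a dedicated network), --network=bridge (reuse Docker's default bridge), an existing network by name, and --subnet= for a custom range. By hand (the subnet-demo profile name is a stand-in; the created Docker network takes the profile's name, as the inspect step above shows):

$ out/minikube-darwin-amd64 start -p subnet-demo --subnet=192.168.60.0/24
$ docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'    # 192.168.60.0/24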

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (54.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:42: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220531103402-2169
E0531 10:34:08.565552    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
minikube_profile_test.go:42: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220531103402-2169: (23.365919805s)
minikube_profile_test.go:42: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220531103402-2169
minikube_profile_test.go:42: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220531103402-2169: (23.972483201s)
minikube_profile_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220531103402-2169
minikube_profile_test.go:53: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220531103402-2169
minikube_profile_test.go:53: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220531103402-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220531103402-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220531103402-2169: (2.846952057s)
helpers_test.go:175: Cleaning up "first-20220531103402-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220531103402-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220531103402-2169: (2.74730536s)
--- PASS: TestMinikubeProfile (54.96s)
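
Note: the profile round-trip above can be reproduced manually (a sketch; profile names are illustrative):

$ minikube start -p first
$ minikube start -p second
$ minikube profile first        # make "first" the active profile
$ minikube profile list -ojson  # machine-readable listing covering both profiles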

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220531103457-2169 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220531103457-2169 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (5.879579072s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.88s)

TestMountStart/serial/VerifyMountFirst (0.42s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220531103457-2169 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)
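
Note: the start/verify pair above can be reproduced manually (a sketch; the profile name is illustrative, and the 9p mount option values mirror the flags the test passes):

$ minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
$ minikube -p mount-demo ssh -- ls /minikube-host   # the mounted host directory, listed from inside the node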

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.12s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220531103457-2169 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220531103457-2169 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.114718894s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.12s)

TestMountStart/serial/VerifyMountSecond (0.44s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220531103457-2169 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.44s)

TestMountStart/serial/DeleteFirst (2.39s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220531103457-2169 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220531103457-2169 --alsologtostderr -v=5: (2.393469358s)
--- PASS: TestMountStart/serial/DeleteFirst (2.39s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220531103457-2169 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (1.62s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220531103457-2169
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220531103457-2169: (1.617128972s)
--- PASS: TestMountStart/serial/Stop (1.62s)

TestMountStart/serial/RestartStopped (4.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220531103457-2169
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220531103457-2169: (3.793010968s)
--- PASS: TestMountStart/serial/RestartStopped (4.80s)

TestMountStart/serial/VerifyMountPostStop (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220531103457-2169 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

TestMultiNode/serial/FreshStart2Nodes (70.41s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220531103524-2169 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0531 10:35:31.617704    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220531103524-2169 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m9.656066646s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.41s)

TestMultiNode/serial/DeployApp2Nodes (5.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.680621966s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- rollout status deployment/busybox: (2.16452651s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-8hz46 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-zq596 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-8hz46 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-zq596 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-8hz46 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-zq596 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.24s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-8hz46 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-8hz46 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-zq596 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220531103524-2169 -- exec busybox-7978565885-zq596 -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
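
Note: the shell pipeline above pulls the host's IP out of busybox nslookup output, whose fifth line carries the resolved address; each pod then pings that IP directly (a sketch; the pod name is a placeholder):

$ kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
192.168.65.2
$ kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.65.2"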

                                                
                                    
TestMultiNode/serial/AddNode (25.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220531103524-2169 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220531103524-2169 -v 3 --alsologtostderr: (24.496964802s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr: (1.079608151s)
--- PASS: TestMultiNode/serial/AddNode (25.58s)

TestMultiNode/serial/ProfileList (0.51s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.51s)

TestMultiNode/serial/CopyFile (16.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --output json --alsologtostderr: (1.113104776s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp testdata/cp-test.txt multinode-20220531103524-2169:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2594249898/001/cp-test_multinode-20220531103524-2169.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169:/home/docker/cp-test.txt multinode-20220531103524-2169-m02:/home/docker/cp-test_multinode-20220531103524-2169_multinode-20220531103524-2169-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m02 "sudo cat /home/docker/cp-test_multinode-20220531103524-2169_multinode-20220531103524-2169-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169:/home/docker/cp-test.txt multinode-20220531103524-2169-m03:/home/docker/cp-test_multinode-20220531103524-2169_multinode-20220531103524-2169-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m03 "sudo cat /home/docker/cp-test_multinode-20220531103524-2169_multinode-20220531103524-2169-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp testdata/cp-test.txt multinode-20220531103524-2169-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2594249898/001/cp-test_multinode-20220531103524-2169-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169-m02:/home/docker/cp-test.txt multinode-20220531103524-2169:/home/docker/cp-test_multinode-20220531103524-2169-m02_multinode-20220531103524-2169.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169 "sudo cat /home/docker/cp-test_multinode-20220531103524-2169-m02_multinode-20220531103524-2169.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169-m02:/home/docker/cp-test.txt multinode-20220531103524-2169-m03:/home/docker/cp-test_multinode-20220531103524-2169-m02_multinode-20220531103524-2169-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m03 "sudo cat /home/docker/cp-test_multinode-20220531103524-2169-m02_multinode-20220531103524-2169-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp testdata/cp-test.txt multinode-20220531103524-2169-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2594249898/001/cp-test_multinode-20220531103524-2169-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169-m03:/home/docker/cp-test.txt multinode-20220531103524-2169:/home/docker/cp-test_multinode-20220531103524-2169-m03_multinode-20220531103524-2169.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169 "sudo cat /home/docker/cp-test_multinode-20220531103524-2169-m03_multinode-20220531103524-2169.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 cp multinode-20220531103524-2169-m03:/home/docker/cp-test.txt multinode-20220531103524-2169-m02:/home/docker/cp-test_multinode-20220531103524-2169-m03_multinode-20220531103524-2169-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 ssh -n multinode-20220531103524-2169-m02 "sudo cat /home/docker/cp-test_multinode-20220531103524-2169-m03_multinode-20220531103524-2169-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.24s)
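
Note: the copy matrix above exercises all three directions of "minikube cp", each copy verified afterwards with ssh -n <node> "sudo cat ..." (a sketch; profile, node, and path names are illustrative):

$ minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt               # host -> node
$ minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt                   # node -> host
$ minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt  # node -> node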

                                                
                                    
TestMultiNode/serial/StopNode (14.11s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 node stop m03: (12.472301647s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status: exit status 7 (818.929875ms)

                                                
                                                
-- stdout --
	multinode-20220531103524-2169
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220531103524-2169-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220531103524-2169-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr: exit status 7 (822.225959ms)

                                                
                                                
-- stdout --
	multinode-20220531103524-2169
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220531103524-2169-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220531103524-2169-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 10:37:36.359663    6847 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:37:36.359839    6847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:37:36.359845    6847 out.go:309] Setting ErrFile to fd 2...
	I0531 10:37:36.359848    6847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:37:36.359965    6847 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:37:36.360131    6847 out.go:303] Setting JSON to false
	I0531 10:37:36.360145    6847 mustload.go:65] Loading cluster: multinode-20220531103524-2169
	I0531 10:37:36.360421    6847 config.go:178] Loaded profile config "multinode-20220531103524-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:37:36.360434    6847 status.go:253] checking status of multinode-20220531103524-2169 ...
	I0531 10:37:36.360783    6847 cli_runner.go:164] Run: docker container inspect multinode-20220531103524-2169 --format={{.State.Status}}
	I0531 10:37:36.430375    6847 status.go:328] multinode-20220531103524-2169 host status = "Running" (err=<nil>)
	I0531 10:37:36.430407    6847 host.go:66] Checking if "multinode-20220531103524-2169" exists ...
	I0531 10:37:36.430664    6847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220531103524-2169
	I0531 10:37:36.501113    6847 host.go:66] Checking if "multinode-20220531103524-2169" exists ...
	I0531 10:37:36.501386    6847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 10:37:36.501435    6847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220531103524-2169
	I0531 10:37:36.570684    6847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55842 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/multinode-20220531103524-2169/id_rsa Username:docker}
	I0531 10:37:36.650717    6847 ssh_runner.go:195] Run: systemctl --version
	I0531 10:37:36.654975    6847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 10:37:36.663682    6847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220531103524-2169
	I0531 10:37:36.734095    6847 kubeconfig.go:92] found "multinode-20220531103524-2169" server: "https://127.0.0.1:55846"
	I0531 10:37:36.734120    6847 api_server.go:165] Checking apiserver status ...
	I0531 10:37:36.734160    6847 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 10:37:36.743915    6847 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1570/cgroup
	W0531 10:37:36.751607    6847 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1570/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 10:37:36.751619    6847 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55846/healthz ...
	I0531 10:37:36.756871    6847 api_server.go:266] https://127.0.0.1:55846/healthz returned 200:
	ok
	I0531 10:37:36.756886    6847 status.go:419] multinode-20220531103524-2169 apiserver status = Running (err=<nil>)
	I0531 10:37:36.756893    6847 status.go:255] multinode-20220531103524-2169 status: &{Name:multinode-20220531103524-2169 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 10:37:36.756908    6847 status.go:253] checking status of multinode-20220531103524-2169-m02 ...
	I0531 10:37:36.757130    6847 cli_runner.go:164] Run: docker container inspect multinode-20220531103524-2169-m02 --format={{.State.Status}}
	I0531 10:37:36.827957    6847 status.go:328] multinode-20220531103524-2169-m02 host status = "Running" (err=<nil>)
	I0531 10:37:36.827976    6847 host.go:66] Checking if "multinode-20220531103524-2169-m02" exists ...
	I0531 10:37:36.828232    6847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220531103524-2169-m02
	I0531 10:37:36.898043    6847 host.go:66] Checking if "multinode-20220531103524-2169-m02" exists ...
	I0531 10:37:36.898327    6847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 10:37:36.898374    6847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220531103524-2169-m02
	I0531 10:37:36.968862    6847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/multinode-20220531103524-2169-m02/id_rsa Username:docker}
	I0531 10:37:37.049918    6847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 10:37:37.059467    6847 status.go:255] multinode-20220531103524-2169-m02 status: &{Name:multinode-20220531103524-2169-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0531 10:37:37.059501    6847 status.go:253] checking status of multinode-20220531103524-2169-m03 ...
	I0531 10:37:37.059736    6847 cli_runner.go:164] Run: docker container inspect multinode-20220531103524-2169-m03 --format={{.State.Status}}
	I0531 10:37:37.131345    6847 status.go:328] multinode-20220531103524-2169-m03 host status = "Stopped" (err=<nil>)
	I0531 10:37:37.131362    6847 status.go:341] host is not running, skipping remaining checks
	I0531 10:37:37.131369    6847 status.go:255] multinode-20220531103524-2169-m03 status: &{Name:multinode-20220531103524-2169-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.11s)
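
Note: "minikube status" deliberately exits non-zero when any node is down; the exit status 7 above reports a stopped host rather than a failure of the command itself (a sketch; the profile name is illustrative):

$ minikube -p demo node stop m03
$ minikube -p demo status || echo "exit $?"   # prints "exit 7" while m03 is stopped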

                                                
                                    
TestMultiNode/serial/StartAfterStop (25.18s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 node start m03 --alsologtostderr: (23.985237577s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status: (1.083336164s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.18s)

TestMultiNode/serial/RestartKeepsNodes (119.16s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220531103524-2169
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220531103524-2169
E0531 10:38:03.102426    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220531103524-2169: (36.995427368s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220531103524-2169 --wait=true -v=8 --alsologtostderr
E0531 10:39:08.562311    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220531103524-2169 --wait=true -v=8 --alsologtostderr: (1m22.064164605s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220531103524-2169
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.16s)

TestMultiNode/serial/DeleteNode (18.87s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 node delete m03: (16.517653303s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.482387405s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.87s)

TestMultiNode/serial/StopMultiNode (25.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 stop: (24.797444246s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status: exit status 7 (174.765541ms)

                                                
                                                
-- stdout --
	multinode-20220531103524-2169
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220531103524-2169-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr: exit status 7 (175.45422ms)

                                                
                                                
-- stdout --
	multinode-20220531103524-2169
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220531103524-2169-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 10:40:45.365774    7342 out.go:296] Setting OutFile to fd 1 ...
	I0531 10:40:45.365905    7342 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:40:45.365909    7342 out.go:309] Setting ErrFile to fd 2...
	I0531 10:40:45.365913    7342 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 10:40:45.366005    7342 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 10:40:45.366171    7342 out.go:303] Setting JSON to false
	I0531 10:40:45.366185    7342 mustload.go:65] Loading cluster: multinode-20220531103524-2169
	I0531 10:40:45.366476    7342 config.go:178] Loaded profile config "multinode-20220531103524-2169": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 10:40:45.366488    7342 status.go:253] checking status of multinode-20220531103524-2169 ...
	I0531 10:40:45.366833    7342 cli_runner.go:164] Run: docker container inspect multinode-20220531103524-2169 --format={{.State.Status}}
	I0531 10:40:45.430025    7342 status.go:328] multinode-20220531103524-2169 host status = "Stopped" (err=<nil>)
	I0531 10:40:45.430047    7342 status.go:341] host is not running, skipping remaining checks
	I0531 10:40:45.430053    7342 status.go:255] multinode-20220531103524-2169 status: &{Name:multinode-20220531103524-2169 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 10:40:45.430093    7342 status.go:253] checking status of multinode-20220531103524-2169-m02 ...
	I0531 10:40:45.430356    7342 cli_runner.go:164] Run: docker container inspect multinode-20220531103524-2169-m02 --format={{.State.Status}}
	I0531 10:40:45.492529    7342 status.go:328] multinode-20220531103524-2169-m02 host status = "Stopped" (err=<nil>)
	I0531 10:40:45.492547    7342 status.go:341] host is not running, skipping remaining checks
	I0531 10:40:45.492555    7342 status.go:255] multinode-20220531103524-2169-m02 status: &{Name:multinode-20220531103524-2169-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.15s)

TestMultiNode/serial/RestartMultiNode (57.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220531103524-2169 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220531103524-2169 --wait=true -v=8 --alsologtostderr --driver=docker : (54.692168029s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220531103524-2169 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.492915914s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.05s)

TestMultiNode/serial/ValidateNameConflict (26.89s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220531103524-2169
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220531103524-2169-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220531103524-2169-m02 --driver=docker : exit status 14 (494.421498ms)

                                                
                                                
-- stdout --
	* [multinode-20220531103524-2169-m02] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220531103524-2169-m02' is duplicated with machine name 'multinode-20220531103524-2169-m02' in profile 'multinode-20220531103524-2169'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220531103524-2169-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220531103524-2169-m03 --driver=docker : (22.903506436s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220531103524-2169
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220531103524-2169: exit status 80 (523.702491ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220531103524-2169
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220531103524-2169-m03 already exists in multinode-20220531103524-2169-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220531103524-2169-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220531103524-2169-m03: (2.922389456s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.89s)
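
Note: two distinct guards fire above: exit status 14 (MK_USAGE) when a new profile name collides with an existing machine name, and exit status 80 (GUEST_NODE_ADD) when the next auto-generated node name already exists. A reproduction sketch (profile names illustrative):

$ minikube start -p demo        # creates machine "demo"
$ minikube node add -p demo     # adds machine "demo-m02"
$ minikube start -p demo-m02    # rejected with exit status 14: duplicate machine name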

                                                
                                    
TestScheduledStopUnix (97.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220531104639-2169 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220531104639-2169 --memory=2048 --driver=docker : (23.434512036s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220531104639-2169 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220531104639-2169 -n scheduled-stop-20220531104639-2169
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220531104639-2169 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220531104639-2169 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220531104639-2169 -n scheduled-stop-20220531104639-2169
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220531104639-2169
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220531104639-2169 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0531 10:48:03.102794    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220531104639-2169
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220531104639-2169: exit status 7 (115.663492ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220531104639-2169
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220531104639-2169 -n scheduled-stop-20220531104639-2169
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220531104639-2169 -n scheduled-stop-20220531104639-2169: exit status 7 (113.359492ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220531104639-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220531104639-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220531104639-2169: (2.419624589s)
--- PASS: TestScheduledStopUnix (97.80s)
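
Note: the schedule/cancel cycle above can be driven manually (a sketch; the profile name is illustrative):

$ minikube stop -p demo --schedule 5m                 # arm a stop five minutes out
$ minikube status --format={{.TimeToStop}} -p demo    # inspect the pending schedule
$ minikube stop -p demo --cancel-scheduled            # disarm it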

                                                
                                    
TestSkaffold (55.84s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe493292373 version
skaffold_test.go:63: skaffold version: v1.38.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220531104817-2169 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220531104817-2169 --memory=2600 --driver=docker : (23.160403594s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe493292373 run --minikube-profile skaffold-20220531104817-2169 --kube-context skaffold-20220531104817-2169 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe493292373 run --minikube-profile skaffold-20220531104817-2169 --kube-context skaffold-20220531104817-2169 --status-check=true --port-forward=false --interactive=false: (17.834678143s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-6f98b98b76-r6m4m" [d74a0c01-7986-4669-9789-09d5f5fa2387] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011699473s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5f9868456d-x7hz5" [60c1df39-6790-441e-81ed-e9a4ab4df611] Running
E0531 10:49:08.561057    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006994032s
helpers_test.go:175: Cleaning up "skaffold-20220531104817-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220531104817-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220531104817-2169: (3.097666229s)
--- PASS: TestSkaffold (55.84s)

TestInsufficientStorage (12.65s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220531104913-2169 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220531104913-2169 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.271260641s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b65ac989-d667-47b9-8714-c63166ef68aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220531104913-2169] minikube v1.26.0-beta.1 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"47ff79f1-6adf-4f1a-840a-68ee7551685e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"2af21fa9-3f7f-4859-8d1e-4687e543f6e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig"}}
	{"specversion":"1.0","id":"5f782b2a-ad99-4114-8b5e-e813113999f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a86a6966-3562-4ea0-b32d-3c05832d8e77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"34a56457-84dc-44a7-8630-f2ab20e8befa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube"}}
	{"specversion":"1.0","id":"61b0fe1d-3cd7-41e8-93d1-06aa0ca7872e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"07b77fd6-1642-48b4-a432-90dfd7f4c086","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b50dbf24-4faf-4a3f-ad3a-c2878775ef50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dbb750c4-9730-4641-9ed6-d2dfab5fe363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"5bcd704b-67a9-4658-b2ac-36a19a630f22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220531104913-2169 in cluster insufficient-storage-20220531104913-2169","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba93f086-a9a8-408d-842b-ac0c4dbdad82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"469cc6f3-a65e-4a04-a867-c72cfcca3200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a3071df-cf8d-4aa1-8073-02174f6e69c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220531104913-2169 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220531104913-2169 --output=json --layout=cluster: exit status 7 (477.510635ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220531104913-2169","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220531104913-2169","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 10:49:22.914147    8419 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220531104913-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220531104913-2169 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220531104913-2169 --output=json --layout=cluster: exit status 7 (413.004818ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220531104913-2169","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220531104913-2169","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 10:49:23.327513    8429 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220531104913-2169" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	E0531 10:49:23.335799    8429 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/insufficient-storage-20220531104913-2169/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220531104913-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220531104913-2169
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220531104913-2169: (2.485359266s)
--- PASS: TestInsufficientStorage (12.65s)
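
Note: the full-disk condition above is simulated through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON events, and the same machine-readable status can be fetched manually (a sketch; the profile name is illustrative):

$ minikube status -p demo --output=json --layout=cluster   # StatusCode 507 = InsufficientStorage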

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.3s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0-beta.1 on darwin
- MINIKUBE_LOCATION=14079
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current89304940/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current89304940/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current89304940/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current89304940/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.30s)
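
The two sudo commands above establish the privileges the hyperkit driver needs: the binary must be root-owned and carry the setuid bit so a non-root user can still create VMs. A small Go sketch (mine, not part of minikube) that verifies both conditions on a Unix system; the path is hypothetical:

// Sketch: check root ownership and the setuid bit on a driver binary,
// i.e. the state that `chown root:wheel` + `chmod u+s` establish above.
// Unix-only: the Stat_t assertion would fail on Windows.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	path := "/usr/local/bin/docker-machine-driver-hyperkit" // hypothetical location
	fi, err := os.Stat(path)
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	st := fi.Sys().(*syscall.Stat_t)
	fmt.Printf("root-owned=%v setuid=%v\n", st.Uid == 0, fi.Mode()&os.ModeSetuid != 0)
}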

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.41s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0-beta.1 on darwin
- MINIKUBE_LOCATION=14079
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3665358751/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3665358751/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3665358751/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3665358751/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.41s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220531105422-2169
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220531105422-2169: (3.695793217s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.70s)

                                                
                                    
TestPause/serial/Start (38.46s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220531105516-2169 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0531 10:55:22.028753    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220531105516-2169 --memory=2048 --install-addons=false --wait=all --driver=docker : (38.455599516s)
--- PASS: TestPause/serial/Start (38.46s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.23s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220531105516-2169 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220531105516-2169 --alsologtostderr -v=1 --driver=docker : (6.214689436s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.23s)

                                                
                                    
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220531105516-2169 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (391.51414ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220531105707-2169] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)
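
This subtest passes precisely because the start fails: exit status 14 accompanies the MK_USAGE error above, raised because --no-kubernetes contradicts --kubernetes-version. A generic Go sketch of that kind of mutual-exclusion check, using the standard flag package rather than minikube's actual CLI stack:

// Sketch: reject two contradictory flags with a usage-error exit code.
// Flag names and the exit code 14 are taken from the log above; the
// implementation is illustrative, not minikube's.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}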

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (26.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --driver=docker : (26.118350321s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220531105707-2169 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.61s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --no-kubernetes --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --no-kubernetes --driver=docker : (14.009886804s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220531105707-2169 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220531105707-2169 status -o json: exit status 2 (465.466724ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220531105707-2169","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220531105707-2169
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220531105707-2169: (2.769422082s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.25s)
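
The profile-level `status -o json` payload above is flatter than the cluster layout seen earlier: a single object with Host, Kubelet, APIServer and Kubeconfig strings, and here the non-zero exit (status 2) accompanies the stopped components, exactly the state a --no-kubernetes start should produce. A minimal Go sketch for decoding it, with field names taken from the line above:

// Sketch: decode `minikube status -o json` for a single profile.
package main

import (
	"encoding/json"
	"fmt"
)

type ProfileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := []byte(`{"Name":"NoKubernetes-20220531105707-2169","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var s ProfileStatus
	if err := json.Unmarshal(raw, &s); err != nil {
		panic(err)
	}
	fmt.Println(s.Host == "Running" && s.Kubelet == "Stopped") // true
}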

                                                
                                    
TestNoKubernetes/serial/Start (8.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --no-kubernetes --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --no-kubernetes --driver=docker : (8.238106579s)
--- PASS: TestNoKubernetes/serial/Start (8.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220531105707-2169 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220531105707-2169 "sudo systemctl is-active --quiet service kubelet": exit status 1 (443.650255ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)
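
The assertion here is inverted: `systemctl is-active` must fail for the test to pass, since no kubelet should be running in a --no-kubernetes node. The useful pattern is distinguishing "command ran and reported inactive" from "command could not run at all". A Go sketch of that pattern, with a local stand-in for the ssh'd command above:

// Sketch: branch on the exit code of a probe command instead of
// treating every error as fatal. The systemctl invocation mirrors the
// one the test runs over ssh.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet not active, exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err) // e.g. no systemctl on macOS
	}
}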

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220531105707-2169

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220531105707-2169: (1.955984212s)
--- PASS: TestNoKubernetes/serial/Stop (1.96s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220531104925-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
E0531 10:58:03.065713    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220531104925-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (42.12485895s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.13s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (4.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220531105707-2169 --driver=docker : (4.356635211s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220531105707-2169 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220531105707-2169 "sudo systemctl is-active --quiet service kubelet": exit status 1 (458.924824ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (47.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220531104926-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220531104926-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (47.356627696s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.36s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220531104925-2169 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220531104925-2169 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220531104925-2169 replace --force -f testdata/netcat-deployment.yaml: (2.171570232s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-hs7kv" [c09b078f-9df3-4444-947e-2468c9cc5b32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-hs7kv" [c09b078f-9df3-4444-947e-2468c9cc5b32] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00697434s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.21s)
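
Every NetCatPod subtest below repeats this shape: apply testdata/netcat-deployment.yaml, then poll the default namespace until every pod labeled app=netcat is Running. A rough Go sketch of that wait, shelling out to kubectl; the context name and the two-minute budget are illustrative (the harness itself allows 15m):

// Sketch: poll `kubectl get pods -l app=netcat` until all pods report
// the Running phase. The context name "auto-example" is a placeholder.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "auto-example",
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			ready := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					ready = false
				}
			}
			if ready {
				fmt.Println("app=netcat healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}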

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531104925-2169 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)
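
The DNS subtest is a one-liner in every group: resolve kubernetes.default from inside the netcat pod, which exercises the cluster DNS service end to end. The same check expressed in Go rather than nslookup; note it only succeeds when run inside a pod whose resolv.conf points at the cluster DNS:

// Sketch: in-cluster analogue of `nslookup kubernetes.default`.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default resolves to", addrs)
}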

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.122602855s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.12s)
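
HairPin probes whether a pod can reach its own Service ("netcat" on port 8080) from inside itself, i.e. hairpin traffic. The expected outcome varies by plugin: here, with the default (auto) configuration, nc exits 1 and the test still passes, while the kindnet run below connects without error. A Go equivalent of the probe, with host and port taken from the command above:

// Sketch: TCP reachability probe, the Go analogue of `nc -w 5 -z netcat 8080`.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Println("hairpin connection failed:", err) // the exit-1 case above
		return
	}
	conn.Close()
	fmt.Println("hairpin connection succeeded")
}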

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-n9pks" [5ebdb2bb-17cb-4082-bb06-9e634ee9924d] Running
E0531 10:59:00.037570    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.015862482s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220531104926-2169 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220531104926-2169 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220531104926-2169 replace --force -f testdata/netcat-deployment.yaml: (1.726977711s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-749zs" [7eeeb444-3ca3-4266-9bc7-2663c9394619] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-668db85669-749zs" [7eeeb444-3ca3-4266-9bc7-2663c9394619] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0531 10:59:08.524472    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-749zs" [7eeeb444-3ca3-4266-9bc7-2663c9394619] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.009021315s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.76s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (69.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220531104927-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220531104927-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m9.205289594s)
--- PASS: TestNetworkPlugins/group/cilium/Start (69.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220531104926-2169 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220531104926-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220531104926-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220531104927-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0531 10:59:27.788586    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220531104927-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m7.762837053s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.76s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-6qz8t" [702b8b4c-fa2f-4e9f-8618-253aea47b30a] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015893815s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220531104927-2169 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220531104927-2169 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220531104927-2169 replace --force -f testdata/netcat-deployment.yaml: (2.286185557s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-jzspb" [1b9eb7e3-be8f-4b00-91ce-01d67bac632c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-jzspb" [1b9eb7e3-be8f-4b00-91ce-01d67bac632c] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.012664485s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-75cqp" [7271ac2a-8f03-4266-8d02-2bdff5052cb4] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015443288s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220531104927-2169 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220531104927-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220531104927-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220531104927-2169 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220531104927-2169 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context calico-20220531104927-2169 replace --force -f testdata/netcat-deployment.yaml: (1.665688961s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-vkcgb" [a0634d33-2dab-4f56-a78a-05b09a0d817e] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-668db85669-vkcgb" [a0634d33-2dab-4f56-a78a-05b09a0d817e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-vkcgb" [a0634d33-2dab-4f56-a78a-05b09a0d817e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006383727s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.69s)

                                                
                                    
TestNetworkPlugins/group/false/Start (79.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220531104926-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220531104926-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m19.238555867s)
--- PASS: TestNetworkPlugins/group/false/Start (79.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220531104927-2169 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220531104927-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220531104927-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (40.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220531104925-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E0531 11:01:06.123945    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220531104925-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (40.760136585s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.76s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220531104925-2169 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220531104925-2169 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context bridge-20220531104925-2169 replace --force -f testdata/netcat-deployment.yaml: (1.634487316s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-t57zt" [2c63c4ed-d221-48f2-bf95-a89f1c24f75d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-t57zt" [2c63c4ed-d221-48f2-bf95-a89f1c24f75d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.009295201s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531104925-2169 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (39.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220531104925-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220531104925-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (39.481606541s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.48s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220531104926-2169 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220531104926-2169 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220531104926-2169 replace --force -f testdata/netcat-deployment.yaml: (1.615590327s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-45b6j" [936fac2a-8fca-4017-a3a7-750646362fd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-45b6j" [936fac2a-8fca-4017-a3a7-750646362fd0] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.007740685s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.64s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220531104926-2169 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220531104926-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220531104926-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220531104926-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.112237226s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (78.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220531104925-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220531104925-2169 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (1m18.177238693s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (78.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220531104925-2169 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220531104925-2169 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220531104925-2169 replace --force -f testdata/netcat-deployment.yaml: (1.669096633s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-x5gpb" [238f2c7f-bd43-439d-8d9b-e5f31832c75c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-x5gpb" [238f2c7f-bd43-439d-8d9b-e5f31832c75c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.009726397s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.72s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531104925-2169 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220531104925-2169 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220531104925-2169 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220531104925-2169 replace --force -f testdata/netcat-deployment.yaml: (1.612690505s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-p9mq6" [6eecd003-2e50-40da-8b20-e67f96bd173d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-p9mq6" [6eecd003-2e50-40da-8b20-e67f96bd173d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.009432746s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.65s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220531104925-2169 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220531104925-2169 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0531 11:29:39.832008    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (49.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220531110349-2169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6
E0531 11:03:51.501639    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:03:56.621990    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:03:57.927441    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:03:57.933333    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:03:57.943568    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:03:57.963683    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:03:58.003838    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:03:58.084054    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:03:58.244171    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:03:58.564346    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:03:59.204543    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:04:00.033736    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 11:04:00.484697    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:04:03.044767    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:04:06.861989    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:04:08.164822    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:04:08.520814    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 11:04:18.404810    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:04:27.344056    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220531110349-2169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6: (49.219714552s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.22s)
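Note on the E0531 cert_rotation.go:168 spam above: these lines appear to come from client-go's certificate-rotation watcher inside the long-lived test process (pid 2169), which still watches client.crt files for profiles (auto-, kindnet-, skaffold-, addons-, ...) that earlier tests already deleted. They interleave with whichever test is currently running and do not affect its PASS/FAIL result. A minimal by-hand check, assuming the same workspace layout (path copied verbatim from the log above):

    # expected to fail with "No such file or directory" while the profile is deleted,
    # which is exactly the condition the watcher keeps logging
    stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt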

TestStartStop/group/no-preload/serial/DeployApp (10.69s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220531110349-2169 create -f testdata/busybox.yaml
E0531 11:04:38.885147    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
start_stop_delete_test.go:198: (dbg) Done: kubectl --context no-preload-20220531110349-2169 create -f testdata/busybox.yaml: (1.573662631s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f42bb1e0-7506-4370-9795-edbd816843cb] Pending
helpers_test.go:342: "busybox" [f42bb1e0-7506-4370-9795-edbd816843cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [f42bb1e0-7506-4370-9795-edbd816843cb] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.015976111s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220531110349-2169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.69s)
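DeployApp follows the same pattern throughout this group: create testdata/busybox.yaml in the target context, poll pods labelled integration-test=busybox until healthy within an 8m0s budget, then exec a probe inside the container. A rough standalone equivalent of that wait with plain kubectl (the harness itself polls via helpers_test.go; kubectl wait is just a convenient stand-in):

    # block until the busybox test pod reports Ready, same 8m budget as the test
    kubectl --context no-preload-20220531110349-2169 wait pod \
      -l integration-test=busybox --for=condition=Ready --timeout=8m
    # the probe the test then runs inside the pod
    kubectl --context no-preload-20220531110349-2169 exec busybox -- /bin/sh -c "ulimit -n"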

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220531110349-2169 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220531110349-2169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/no-preload/serial/Stop (12.56s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220531110349-2169 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220531110349-2169 --alsologtostderr -v=3: (12.555875983s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.56s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169: exit status 7 (115.597902ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220531110349-2169 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)
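EnableAddonAfterStop deliberately probes a stopped cluster first: minikube status exits nonzero when the host is down (exit status 7 in this run, which the harness records as "may be ok"), and the addon is then enabled against the stopped profile so it takes effect on the next start. Reproducing the probe by hand with the same flags as the log:

    # status probe against the stopped profile; a nonzero exit code is expected
    out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
    echo $?   # 7 in this run, per the Non-zero exit line above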

TestStartStop/group/no-preload/serial/SecondStart (358.22s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220531110349-2169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6
E0531 11:05:08.305922    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:05:14.686506    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:14.692570    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:14.702810    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:14.724326    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:14.764465    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:14.844753    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:15.004859    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:15.325498    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:15.966040    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:17.247771    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:19.808578    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:19.845171    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:05:24.928825    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:28.433888    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:28.439919    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:28.450217    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:28.472083    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:28.513691    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:28.594198    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:28.754365    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:29.074657    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:29.714938    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:30.995265    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:33.557560    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:35.169262    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:38.677883    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:48.917967    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:05:55.650423    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:06:09.398042    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:06:30.225271    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:33.978952    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:33.984699    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:33.994874    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:34.015161    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:34.057177    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:34.139245    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:34.299838    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:34.620004    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:35.261568    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:36.541770    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:36.610157    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:06:39.102594    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:41.764406    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:06:44.224462    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
E0531 11:06:50.359781    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220531110349-2169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6: (5m57.664586391s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220531110349-2169 -n no-preload-20220531110349-2169
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (358.22s)
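SecondStart keeps --preload=false, so minikube pulls every Kubernetes image again instead of extracting a cached preloaded-images tarball; combined with --wait=true, that accounts for the ~6-minute runtime versus the ~40s preloaded FirstStart times elsewhere in this report. A quick look at what a preloaded start would have used, assuming the standard .minikube cache layout (a sketch, not part of the test output):

    # cached artifacts, including any preloaded-tarball directory, live here
    ls /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache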

TestStartStop/group/old-k8s-version/serial/Stop (1.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220531110241-2169 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220531110241-2169 --alsologtostderr -v=3: (1.623612693s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220531110241-2169 -n old-k8s-version-20220531110241-2169: exit status 7 (117.083553ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220531110241-2169 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-wzj5d" [1e85297b-8675-49ff-bed8-a051aa621a28] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-8469778f77-wzj5d" [1e85297b-8675-49ff-bed8-a051aa621a28] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.013945735s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.62s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-wzj5d" [1e85297b-8675-49ff-bed8-a051aa621a28] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00759488s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220531110349-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0531 11:11:19.584179    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
start_stop_delete_test.go:293: (dbg) Done: kubectl --context no-preload-20220531110349-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.613958082s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.62s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220531110349-2169 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)
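VerifyKubernetesImages compares the node runtime's image list against the expected set for v1.23.6 and reports anything extra, here the busybox image deployed earlier in the serial chain. The same listing can be flattened by hand (the jq filter assumes crictl's usual {"images":[{"repoTags":[...]}]} JSON shape):

    # dump every image tag known to the container runtime on the node
    out/minikube-darwin-amd64 ssh -p no-preload-20220531110349-2169 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'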

TestStartStop/group/embed-certs/serial/FirstStart (38.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220531111208-2169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6
E0531 11:12:24.878281    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/false-20220531104926-2169/client.crt: no such file or directory
E0531 11:12:27.917930    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220531111208-2169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6: (38.856624038s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (38.86s)

TestStartStop/group/embed-certs/serial/DeployApp (9.74s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220531111208-2169 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context embed-certs-20220531111208-2169 create -f testdata/busybox.yaml: (1.616624351s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e01bbb82-9891-43ac-95a4-73d55643fde2] Pending
helpers_test.go:342: "busybox" [e01bbb82-9891-43ac-95a4-73d55643fde2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e01bbb82-9891-43ac-95a4-73d55643fde2] Running
E0531 11:12:55.605984    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531104925-2169/client.crt: no such file or directory
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.012054083s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220531111208-2169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.74s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220531111208-2169 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220531111208-2169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/embed-certs/serial/Stop (12.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220531111208-2169 --alsologtostderr -v=3
E0531 11:13:03.055039    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531101620-2169/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220531111208-2169 --alsologtostderr -v=3: (12.584593065s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169: exit status 7 (118.240991ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220531111208-2169 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (332.34s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220531111208-2169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6
E0531 11:13:35.733390    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:13:46.307252    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
E0531 11:13:57.921153    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
E0531 11:14:00.026459    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/skaffold-20220531104817-2169/client.crt: no such file or directory
E0531 11:14:03.424476    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kubenet-20220531104925-2169/client.crt: no such file or directory
E0531 11:14:08.514292    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531101240-2169/client.crt: no such file or directory
E0531 11:14:39.835475    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:39.841960    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:39.853939    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:39.876156    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:39.918371    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:39.998543    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:40.159484    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:40.481408    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:41.122116    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:42.402438    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:44.962576    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:14:50.082761    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:15:00.324996    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:15:14.681038    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531104927-2169/client.crt: no such file or directory
E0531 11:15:20.805419    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
E0531 11:15:28.428593    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
E0531 11:16:01.767155    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531110349-2169/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220531111208-2169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6: (5m31.855560826s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220531111208-2169 -n embed-certs-20220531111208-2169
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (332.34s)
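--embed-certs makes minikube inline the client certificate and key into the kubeconfig entry as base64 *-data fields instead of referencing files under .minikube/profiles, the very files the cert_rotation watcher keeps failing to find for deleted profiles. One way to see the difference with plain kubectl (output shape assumed):

    # embedded credentials show as client-certificate-data / client-key-data
    # rather than client-certificate / client-key file paths
    kubectl config view --minify --context=embed-certs-20220531111208-2169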

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-fbr7m" [60efb3f7-78cf-4254-94bf-7e679c8cd8f9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0531 11:18:46.310519    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531104925-2169/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-8469778f77-fbr7m" [60efb3f7-78cf-4254-94bf-7e679c8cd8f9] Running
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.013244669s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.6s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-fbr7m" [60efb3f7-78cf-4254-94bf-7e679c8cd8f9] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008057868s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220531111208-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0531 11:18:57.924178    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531104926-2169/client.crt: no such file or directory
start_stop_delete_test.go:293: (dbg) Done: kubectl --context embed-certs-20220531111208-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.588031754s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.60s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220531111208-2169 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (40.35s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220531111947-2169 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220531111947-2169 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6: (40.349667759s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (40.35s)
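This group matches the default profile except for --apiserver-port=8444: the API server inside the node listens on 8444 rather than the default 8443, and with the docker driver the kubeconfig then points at whatever host port Docker maps onto it. A quick check of the recorded endpoint (plain kubectl):

    # print the API server URL stored for this profile's context
    kubectl cluster-info --context default-k8s-different-port-20220531111947-2169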

TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.68s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220531111947-2169 create -f testdata/busybox.yaml
E0531 11:20:28.430279    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531104927-2169/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Done: kubectl --context default-k8s-different-port-20220531111947-2169 create -f testdata/busybox.yaml: (1.549047682s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [63ffdd07-e640-4416-8e26-6533994d7af2] Pending
helpers_test.go:342: "busybox" [63ffdd07-e640-4416-8e26-6533994d7af2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [63ffdd07-e640-4416-8e26-6533994d7af2] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.015834675s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220531111947-2169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.68s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220531111947-2169 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220531111947-2169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/default-k8s-different-port/serial/Stop (12.53s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220531111947-2169 --alsologtostderr -v=3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220531111947-2169 --alsologtostderr -v=3: (12.530393767s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.53s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169: exit status 7 (116.187432ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220531111947-2169 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (333.49s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220531111947-2169 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220531111947-2169 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6: (5m33.016656282s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531111947-2169 -n default-k8s-different-port-20220531111947-2169
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (333.49s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-8kcn7" [b45cc5ed-1c03-4907-8209-1b9fa4dc5f17] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-8469778f77-8kcn7" [b45cc5ed-1c03-4907-8209-1b9fa4dc5f17] Running
E0531 11:26:33.970985    2169 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1018-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531104925-2169/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.013935915s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.58s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-8kcn7" [b45cc5ed-1c03-4907-8209-1b9fa4dc5f17] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009079692s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220531111947-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context default-k8s-different-port-20220531111947-2169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.565473924s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.58s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220531111947-2169 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/newest-cni/serial/FirstStart (37.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220531112729-2169 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220531112729-2169 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6: (37.137523126s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.14s)
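The newest-cni group exercises CNI wiring rather than workloads: --network-plugin=cni plus the kubeadm/kubelet --extra-config overrides, with --wait narrowed to apiserver,system_pods,default_sa because no user pods can schedule until a CNI is actually installed (hence the 0.00s DeployApp pass below). The rendered kubeadm settings, including the pod-network-cidr override, can be read back from the cluster (standard kubeadm behaviour; a sketch):

    # kubeadm stores its ClusterConfiguration (incl. podSubnet) in kube-system
    kubectl --context newest-cni-20220531112729-2169 -n kube-system \
      get configmap kubeadm-config -o yaml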

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220531112729-2169 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/newest-cni/serial/Stop (12.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220531112729-2169 --alsologtostderr -v=3
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220531112729-2169 --alsologtostderr -v=3: (12.734425196s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.73s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169: exit status 7 (117.593693ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220531112729-2169 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/newest-cni/serial/SecondStart (17.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220531112729-2169 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220531112729-2169 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6: (17.300006793s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220531112729-2169 -n newest-cni-20220531112729-2169
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.80s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220531112729-2169 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

Test skip (18/288)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestAddons/parallel/Registry (14.93s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 12.414101ms
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-qrpzd" [5df2a007-874a-4f2f-aa2c-5f15aadee702] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012311514s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-5sgrf" [21e86367-7cc4-47d0-8d11-0cc1306c90ac] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008617374s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220531101240-2169 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220531101240-2169 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220531101240-2169 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.846906496s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.93s)

TestAddons/parallel/Ingress (11.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220531101240-2169 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220531101240-2169 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220531101240-2169 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [7942a70d-9a34-43e8-999c-3127aa398f06] Pending
helpers_test.go:342: "nginx" [7942a70d-9a34-43e8-999c-3127aa398f06] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [7942a70d-9a34-43e8-999c-3127aa398f06] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009237343s
addons_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220531101240-2169 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.56s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (11.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220531101620-2169 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220531101620-2169 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-7kxtc" [e9ec2cd3-35e2-43ae-a55e-66d36c36ca29] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-7kxtc" [e9ec2cd3-35e2-43ae-a55e-66d36c36ca29] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.012201974s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (11.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.68s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220531104925-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220531104925-2169
--- SKIP: TestNetworkPlugins/group/flannel (0.68s)

TestNetworkPlugins/group/custom-flannel (0.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220531104926-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220531104926-2169
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.56s)

TestStartStop/group/disable-driver-mounts (0.56s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220531111946-2169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220531111946-2169
--- SKIP: TestStartStop/group/disable-driver-mounts (0.56s)
